2026-03-09T20:12:04.646 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-09T20:12:04.651 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-09T20:12:04.670 INFO:teuthology.run:Config:
archive_path: /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/640
branch: squid
description: orch/cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests}
email: null
first_in_suite: false
flavor: default
job_id: '640'
last_in_suite: false
machine_type: vps
name: kyr-2026-03-09_11:23:05-orch-squid-none-default-vps
no_nested_subset: false
openstack:
- volumes:
    count: 4
    size: 10
os_type: centos
os_version: 9.stream
overrides:
  admin_socket:
    branch: squid
  ansible.cephlab:
    branch: main
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      timezone: UTC
  ceph:
    conf:
      client:
        debug ms: 1
      global:
        mon election default strategy: 1
        ms bind msgr2: false
        ms type: async
      mgr:
        debug mgr: 20
        debug ms: 1
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
        mon warn on pool no app: false
      osd:
        debug ms: 1
        debug osd: 20
        osd class default list: '*'
        osd class load list: '*'
        osd mclock iops capacity threshold hdd: 49000
        osd shutdown pgref assert: true
    flavor: default
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    - reached quota
    - but it is still running
    - overall HEALTH_
    - \(POOL_FULL\)
    - \(SMALLER_PGP_NUM\)
    - \(CACHE_POOL_NO_HIT_SET\)
    - \(CACHE_POOL_NEAR_FULL\)
    - \(POOL_APP_NOT_ENABLED\)
    - \(PG_AVAILABILITY\)
    - \(PG_DEGRADED\)
    - CEPHADM_STRAY_DAEMON
    log-only-match:
    - CEPHADM_
    mon_bind_msgr2: false
    sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
  ceph-deploy:
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
  cephadm:
    cephadm_mode: cephadm-package
  install:
    ceph:
      flavor: default
      sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
    extra_packages:
    - cephadm
    extra_system_packages:
      deb:
      - python3-xmltodict
      - python3-jmespath
      rpm:
      - bzip2
      - perl-Test-Harness
      - python3-xmltodict
      - python3-jmespath
  selinux:
    allowlist:
    - scontext=system_u:system_r:logrotate_t:s0
  workunit:
    branch: tt-squid
    sha1: 569c3e99c9b32a51b4eaf08731c728f4513ed589
owner: kyr
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - mon.a
  - mon.c
  - mgr.y
  - osd.0
  - osd.1
  - osd.2
  - osd.3
  - client.0
  - ceph.rgw.foo.a
  - node-exporter.a
  - alertmanager.a
- - mon.b
  - mgr.x
  - osd.4
  - osd.5
  - osd.6
  - osd.7
  - client.1
  - prometheus.a
  - grafana.a
  - node-exporter.b
  - ceph.iscsi.iscsi.a
seed: 3443
sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
sleep_before_teardown: 0
subset: 1/64
suite: orch
suite_branch: tt-squid
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: 569c3e99c9b32a51b4eaf08731c728f4513ed589
targets:
  vm05.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDZK74MViz971/YF1vgAT3x2zrARR3iZj/GT9Ymu/W+kCoQHlnm8zkWmSM1uAZba2e3kQHle9FPo8a8itjYGfXY=
  vm09.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBzn6OPVfPEs1Qbwzq0Eytm4MctHORIvJQOp6Zgmr2o35P2+9BNlpBJnmWAyJjYfriBe2uNqjUipy/RDxXGDBmU=
tasks:
- pexec:
    all:
    - sudo dnf remove nvme-cli -y
    - sudo dnf install runc nvmetcli nvme-cli -y
    - sudo sed -i 's/^#runtime = "crun"/runtime = "runc"/g' /usr/share/containers/containers.conf
    - sudo sed -i 's/runtime = "crun"/#runtime = "crun"/g' /usr/share/containers/containers.conf
- install: null
- cephadm:
    conf:
      mgr:
        debug mgr: 20
        debug ms: 1
- workunit:
    clients:
      client.0:
      - rados/test.sh
      - rados/test_pool_quota.sh
teuthology:
  fragments_dropped: []
  meta: {}
  postmerge: []
teuthology_branch: clyso-debian-13
teuthology_repo: https://github.com/clyso/teuthology
teuthology_sha1: 1c580df7a9c7c2aadc272da296344fd99f27c444
timestamp: 2026-03-09_11:23:05
tube: vps
user: kyr
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.611473
2026-03-09T20:12:04.670 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa; will attempt to use it
2026-03-09T20:12:04.670 INFO:teuthology.run:Found tasks at /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks
2026-03-09T20:12:04.671 INFO:teuthology.run_tasks:Running task internal.check_packages...
2026-03-09T20:12:04.671 INFO:teuthology.task.internal:Checking packages...
2026-03-09T20:12:04.671 INFO:teuthology.task.internal:Checking packages for os_type 'centos', flavor 'default' and ceph hash 'e911bdebe5c8faa3800735d1568fcdca65db60df'
2026-03-09T20:12:04.671 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-09T20:12:04.671 INFO:teuthology.packaging:ref: None
2026-03-09T20:12:04.671 INFO:teuthology.packaging:tag: None
2026-03-09T20:12:04.671 INFO:teuthology.packaging:branch: squid
2026-03-09T20:12:04.671 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T20:12:04.671 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&ref=squid
2026-03-09T20:12:05.439 INFO:teuthology.task.internal:Found packages for ceph version 19.2.3-678.ge911bdeb
2026-03-09T20:12:05.440 INFO:teuthology.run_tasks:Running task internal.buildpackages_prep...
2026-03-09T20:12:05.441 INFO:teuthology.task.internal:no buildpackages task found
2026-03-09T20:12:05.441 INFO:teuthology.run_tasks:Running task internal.save_config...
2026-03-09T20:12:05.441 INFO:teuthology.task.internal:Saving configuration
2026-03-09T20:12:05.446 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-09T20:12:05.446 INFO:teuthology.task.internal.check_lock:Checking locks...
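The Shaman query that internal.check_packages issues above can be replayed by hand to inspect the matching builds; a minimal sketch, assuming curl and jq are available on the workstation (the URL is copied verbatim from the DEBUG line above):

curl -s 'https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&ref=squid' | jq .

Because the job also pins sha1 e911bdebe5c8faa3800735d1568fcdca65db60df, the same search can be repeated with sha1=<hash> in place of ref=squid, which is what the install task does later in this log.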
2026-03-09T20:12:05.453 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm05.local', 'description': '/archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/640', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'centos', 'os_version': '9.stream', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-09 20:10:50.857016', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:05', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDZK74MViz971/YF1vgAT3x2zrARR3iZj/GT9Ymu/W+kCoQHlnm8zkWmSM1uAZba2e3kQHle9FPo8a8itjYGfXY='}
2026-03-09T20:12:05.461 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm09.local', 'description': '/archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/640', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'centos', 'os_version': '9.stream', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-09 20:10:50.857719', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:09', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBzn6OPVfPEs1Qbwzq0Eytm4MctHORIvJQOp6Zgmr2o35P2+9BNlpBJnmWAyJjYfriBe2uNqjUipy/RDxXGDBmU='}
2026-03-09T20:12:05.461 INFO:teuthology.run_tasks:Running task internal.add_remotes...
2026-03-09T20:12:05.462 INFO:teuthology.task.internal:roles: ubuntu@vm05.local - ['mon.a', 'mon.c', 'mgr.y', 'osd.0', 'osd.1', 'osd.2', 'osd.3', 'client.0', 'ceph.rgw.foo.a', 'node-exporter.a', 'alertmanager.a']
2026-03-09T20:12:05.462 INFO:teuthology.task.internal:roles: ubuntu@vm09.local - ['mon.b', 'mgr.x', 'osd.4', 'osd.5', 'osd.6', 'osd.7', 'client.1', 'prometheus.a', 'grafana.a', 'node-exporter.b', 'ceph.iscsi.iscsi.a']
2026-03-09T20:12:05.462 INFO:teuthology.run_tasks:Running task console_log...
2026-03-09T20:12:05.469 DEBUG:teuthology.task.console_log:vm05 does not support IPMI; excluding
2026-03-09T20:12:05.477 DEBUG:teuthology.task.console_log:vm09 does not support IPMI; excluding
2026-03-09T20:12:05.478 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7fbcb5f7e170>, signals=[15])
2026-03-09T20:12:05.478 INFO:teuthology.run_tasks:Running task internal.connect...
2026-03-09T20:12:05.479 INFO:teuthology.task.internal:Opening connections...
2026-03-09T20:12:05.479 DEBUG:teuthology.task.internal:connecting to ubuntu@vm05.local
2026-03-09T20:12:05.479 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm05.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T20:12:05.537 DEBUG:teuthology.task.internal:connecting to ubuntu@vm09.local
2026-03-09T20:12:05.537 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm09.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T20:12:05.596 INFO:teuthology.run_tasks:Running task internal.push_inventory...
2026-03-09T20:12:05.598 DEBUG:teuthology.orchestra.run.vm05:> uname -m
2026-03-09T20:12:05.638 INFO:teuthology.orchestra.run.vm05.stdout:x86_64
2026-03-09T20:12:05.638 DEBUG:teuthology.orchestra.run.vm05:> cat /etc/os-release
2026-03-09T20:12:05.694 INFO:teuthology.orchestra.run.vm05.stdout:NAME="CentOS Stream"
2026-03-09T20:12:05.695 INFO:teuthology.orchestra.run.vm05.stdout:VERSION="9"
2026-03-09T20:12:05.695 INFO:teuthology.orchestra.run.vm05.stdout:ID="centos"
2026-03-09T20:12:05.695 INFO:teuthology.orchestra.run.vm05.stdout:ID_LIKE="rhel fedora"
2026-03-09T20:12:05.695 INFO:teuthology.orchestra.run.vm05.stdout:VERSION_ID="9"
2026-03-09T20:12:05.695 INFO:teuthology.orchestra.run.vm05.stdout:PLATFORM_ID="platform:el9"
2026-03-09T20:12:05.695 INFO:teuthology.orchestra.run.vm05.stdout:PRETTY_NAME="CentOS Stream 9"
2026-03-09T20:12:05.695 INFO:teuthology.orchestra.run.vm05.stdout:ANSI_COLOR="0;31"
2026-03-09T20:12:05.695 INFO:teuthology.orchestra.run.vm05.stdout:LOGO="fedora-logo-icon"
2026-03-09T20:12:05.695 INFO:teuthology.orchestra.run.vm05.stdout:CPE_NAME="cpe:/o:centos:centos:9"
2026-03-09T20:12:05.695 INFO:teuthology.orchestra.run.vm05.stdout:HOME_URL="https://centos.org/"
2026-03-09T20:12:05.695 INFO:teuthology.orchestra.run.vm05.stdout:BUG_REPORT_URL="https://issues.redhat.com/"
2026-03-09T20:12:05.695 INFO:teuthology.orchestra.run.vm05.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
2026-03-09T20:12:05.695 INFO:teuthology.orchestra.run.vm05.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
2026-03-09T20:12:05.695 INFO:teuthology.lock.ops:Updating vm05.local on lock server
2026-03-09T20:12:05.700 DEBUG:teuthology.orchestra.run.vm09:> uname -m
2026-03-09T20:12:05.717 INFO:teuthology.orchestra.run.vm09.stdout:x86_64
2026-03-09T20:12:05.717 DEBUG:teuthology.orchestra.run.vm09:> cat /etc/os-release
2026-03-09T20:12:05.771 INFO:teuthology.orchestra.run.vm09.stdout:NAME="CentOS Stream"
2026-03-09T20:12:05.771 INFO:teuthology.orchestra.run.vm09.stdout:VERSION="9"
2026-03-09T20:12:05.771 INFO:teuthology.orchestra.run.vm09.stdout:ID="centos"
2026-03-09T20:12:05.771 INFO:teuthology.orchestra.run.vm09.stdout:ID_LIKE="rhel fedora"
2026-03-09T20:12:05.771 INFO:teuthology.orchestra.run.vm09.stdout:VERSION_ID="9"
2026-03-09T20:12:05.771 INFO:teuthology.orchestra.run.vm09.stdout:PLATFORM_ID="platform:el9"
2026-03-09T20:12:05.771 INFO:teuthology.orchestra.run.vm09.stdout:PRETTY_NAME="CentOS Stream 9"
2026-03-09T20:12:05.771 INFO:teuthology.orchestra.run.vm09.stdout:ANSI_COLOR="0;31"
2026-03-09T20:12:05.771 INFO:teuthology.orchestra.run.vm09.stdout:LOGO="fedora-logo-icon"
2026-03-09T20:12:05.772 INFO:teuthology.orchestra.run.vm09.stdout:CPE_NAME="cpe:/o:centos:centos:9"
2026-03-09T20:12:05.772 INFO:teuthology.orchestra.run.vm09.stdout:HOME_URL="https://centos.org/"
2026-03-09T20:12:05.772 INFO:teuthology.orchestra.run.vm09.stdout:BUG_REPORT_URL="https://issues.redhat.com/"
2026-03-09T20:12:05.772 INFO:teuthology.orchestra.run.vm09.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
2026-03-09T20:12:05.772 INFO:teuthology.orchestra.run.vm09.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
2026-03-09T20:12:05.772 INFO:teuthology.lock.ops:Updating vm09.local on lock server
2026-03-09T20:12:05.776 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2026-03-09T20:12:05.778 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2026-03-09T20:12:05.779 INFO:teuthology.task.internal:Checking for old test directory...
2026-03-09T20:12:05.779 DEBUG:teuthology.orchestra.run.vm05:> test '!' -e /home/ubuntu/cephtest
2026-03-09T20:12:05.780 DEBUG:teuthology.orchestra.run.vm09:> test '!' -e /home/ubuntu/cephtest
2026-03-09T20:12:05.826 INFO:teuthology.run_tasks:Running task internal.check_ceph_data...
2026-03-09T20:12:05.827 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph...
2026-03-09T20:12:05.827 DEBUG:teuthology.orchestra.run.vm05:> test -z $(ls -A /var/lib/ceph)
2026-03-09T20:12:05.835 DEBUG:teuthology.orchestra.run.vm09:> test -z $(ls -A /var/lib/ceph)
2026-03-09T20:12:05.848 INFO:teuthology.orchestra.run.vm05.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-09T20:12:05.881 INFO:teuthology.orchestra.run.vm09.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-09T20:12:05.881 INFO:teuthology.run_tasks:Running task internal.vm_setup...
2026-03-09T20:12:05.889 DEBUG:teuthology.orchestra.run.vm05:> test -e /ceph-qa-ready
2026-03-09T20:12:05.902 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T20:12:06.096 DEBUG:teuthology.orchestra.run.vm09:> test -e /ceph-qa-ready
2026-03-09T20:12:06.111 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T20:12:06.335 INFO:teuthology.run_tasks:Running task internal.base...
2026-03-09T20:12:06.337 INFO:teuthology.task.internal:Creating test directory...
2026-03-09T20:12:06.337 DEBUG:teuthology.orchestra.run.vm05:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-09T20:12:06.339 DEBUG:teuthology.orchestra.run.vm09:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-09T20:12:06.356 INFO:teuthology.run_tasks:Running task internal.archive_upload...
2026-03-09T20:12:06.357 INFO:teuthology.run_tasks:Running task internal.archive...
2026-03-09T20:12:06.359 INFO:teuthology.task.internal:Creating archive directory...
2026-03-09T20:12:06.359 DEBUG:teuthology.orchestra.run.vm05:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-09T20:12:06.399 DEBUG:teuthology.orchestra.run.vm09:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-09T20:12:06.427 INFO:teuthology.run_tasks:Running task internal.coredump...
2026-03-09T20:12:06.429 INFO:teuthology.task.internal:Enabling coredump saving...
2026-03-09T20:12:06.429 DEBUG:teuthology.orchestra.run.vm05:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-09T20:12:06.482 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T20:12:06.483 DEBUG:teuthology.orchestra.run.vm09:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-09T20:12:06.498 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T20:12:06.498 DEBUG:teuthology.orchestra.run.vm05:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-09T20:12:06.525 DEBUG:teuthology.orchestra.run.vm09:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-09T20:12:06.550 INFO:teuthology.orchestra.run.vm05.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T20:12:06.564 INFO:teuthology.orchestra.run.vm05.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T20:12:06.565 INFO:teuthology.orchestra.run.vm09.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T20:12:06.575 INFO:teuthology.orchestra.run.vm09.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T20:12:06.577 INFO:teuthology.run_tasks:Running task internal.sudo...
2026-03-09T20:12:06.578 INFO:teuthology.task.internal:Configuring sudo...
2026-03-09T20:12:06.578 DEBUG:teuthology.orchestra.run.vm05:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-09T20:12:06.609 DEBUG:teuthology.orchestra.run.vm09:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-09T20:12:06.643 INFO:teuthology.run_tasks:Running task internal.syslog...
2026-03-09T20:12:06.645 INFO:teuthology.task.internal.syslog:Starting syslog monitoring...
2026-03-09T20:12:06.645 DEBUG:teuthology.orchestra.run.vm05:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-09T20:12:06.681 DEBUG:teuthology.orchestra.run.vm09:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-09T20:12:06.699 DEBUG:teuthology.orchestra.run.vm05:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-09T20:12:06.774 DEBUG:teuthology.orchestra.run.vm05:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-09T20:12:06.832 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-09T20:12:06.832 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-09T20:12:06.900 DEBUG:teuthology.orchestra.run.vm09:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-09T20:12:06.921 DEBUG:teuthology.orchestra.run.vm09:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-09T20:12:06.977 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-09T20:12:06.977 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-09T20:12:07.038 DEBUG:teuthology.orchestra.run.vm05:> sudo service rsyslog restart
2026-03-09T20:12:07.040 DEBUG:teuthology.orchestra.run.vm09:> sudo service rsyslog restart
2026-03-09T20:12:07.069 INFO:teuthology.orchestra.run.vm05.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-09T20:12:07.107 INFO:teuthology.orchestra.run.vm09.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-09T20:12:07.554 INFO:teuthology.run_tasks:Running task internal.timer...
2026-03-09T20:12:07.556 INFO:teuthology.task.internal:Starting timer...
2026-03-09T20:12:07.556 INFO:teuthology.run_tasks:Running task pcp...
2026-03-09T20:12:07.558 INFO:teuthology.run_tasks:Running task selinux...
2026-03-09T20:12:07.561 DEBUG:teuthology.task:Applying overrides for task selinux: {'allowlist': ['scontext=system_u:system_r:logrotate_t:s0']}
2026-03-09T20:12:07.561 INFO:teuthology.task.selinux:Excluding vm05: VMs are not yet supported
2026-03-09T20:12:07.561 INFO:teuthology.task.selinux:Excluding vm09: VMs are not yet supported
2026-03-09T20:12:07.561 DEBUG:teuthology.task.selinux:Getting current SELinux state
2026-03-09T20:12:07.561 DEBUG:teuthology.task.selinux:Existing SELinux modes: {}
2026-03-09T20:12:07.561 INFO:teuthology.task.selinux:Putting SELinux into permissive mode
2026-03-09T20:12:07.561 INFO:teuthology.run_tasks:Running task ansible.cephlab...
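The internal.syslog task above writes an rsyslog drop-in through "sudo dd of=/etc/rsyslog.d/80-cephtest.conf", but the file contents are not echoed in this log. A minimal sketch of an equivalent manual setup, assuming the usual kern.log/misc.log split into the archive directory (the selectors below are an assumption, not the task's actual file):

sudo tee /etc/rsyslog.d/80-cephtest.conf <<'EOF'
# Assumed routing: kernel messages to kern.log, everything else to misc.log
kern.*          -/home/ubuntu/cephtest/archive/syslog/kern.log
*.*;kern.none   -/home/ubuntu/cephtest/archive/syslog/misc.log
EOF
sudo systemctl restart rsyslog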
2026-03-09T20:12:07.562 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'timezone': 'UTC'}}
2026-03-09T20:12:07.563 DEBUG:teuthology.repo_utils:Setting repo remote to https://github.com/ceph/ceph-cm-ansible.git
2026-03-09T20:12:07.564 INFO:teuthology.repo_utils:Fetching github.com_ceph_ceph-cm-ansible_main from origin
2026-03-09T20:12:08.275 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main to origin/main
2026-03-09T20:12:08.282 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}]
2026-03-09T20:12:08.282 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "timezone": "UTC"}' -i /tmp/teuth_ansible_inventoryz7dw24c4 --limit vm05.local,vm09.local /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
2026-03-09T20:14:15.448 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm05.local'), Remote(name='ubuntu@vm09.local')]
2026-03-09T20:14:15.448 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm05.local'
2026-03-09T20:14:15.449 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm05.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T20:14:15.517 DEBUG:teuthology.orchestra.run.vm05:> true
2026-03-09T20:14:15.610 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm05.local'
2026-03-09T20:14:15.610 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm09.local'
2026-03-09T20:14:15.610 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm09.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T20:14:15.678 DEBUG:teuthology.orchestra.run.vm09:> true
2026-03-09T20:14:15.757 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm09.local'
2026-03-09T20:14:15.757 INFO:teuthology.run_tasks:Running task clock...
2026-03-09T20:14:15.760 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew...
2026-03-09T20:14:15.760 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-09T20:14:15.760 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-09T20:14:15.762 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-09T20:14:15.762 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-09T20:14:15.802 INFO:teuthology.orchestra.run.vm05.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
2026-03-09T20:14:15.825 INFO:teuthology.orchestra.run.vm05.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-03-09T20:14:15.847 INFO:teuthology.orchestra.run.vm09.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
2026-03-09T20:14:15.859 INFO:teuthology.orchestra.run.vm05.stderr:sudo: ntpd: command not found
2026-03-09T20:14:15.868 INFO:teuthology.orchestra.run.vm09.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-03-09T20:14:15.872 INFO:teuthology.orchestra.run.vm05.stdout:506 Cannot talk to daemon
2026-03-09T20:14:15.887 INFO:teuthology.orchestra.run.vm05.stderr:Failed to start ntp.service: Unit ntp.service not found.
2026-03-09T20:14:15.899 INFO:teuthology.orchestra.run.vm05.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
2026-03-09T20:14:15.904 INFO:teuthology.orchestra.run.vm09.stderr:sudo: ntpd: command not found
2026-03-09T20:14:15.918 INFO:teuthology.orchestra.run.vm09.stdout:506 Cannot talk to daemon
2026-03-09T20:14:15.942 INFO:teuthology.orchestra.run.vm09.stderr:Failed to start ntp.service: Unit ntp.service not found.
2026-03-09T20:14:15.946 INFO:teuthology.orchestra.run.vm05.stderr:bash: line 1: ntpq: command not found
2026-03-09T20:14:15.962 INFO:teuthology.orchestra.run.vm09.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
2026-03-09T20:14:15.998 INFO:teuthology.orchestra.run.vm05.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-09T20:14:15.998 INFO:teuthology.orchestra.run.vm05.stdout:===============================================================================
2026-03-09T20:14:15.998 INFO:teuthology.orchestra.run.vm05.stdout:^? sonne.floppy.org 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-09T20:14:15.998 INFO:teuthology.orchestra.run.vm05.stdout:^? de.relay.mahi.be 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-09T20:14:15.998 INFO:teuthology.orchestra.run.vm05.stdout:^? 212.132.108.186 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-09T20:14:15.998 INFO:teuthology.orchestra.run.vm05.stdout:^? 185.252.140.125 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-09T20:14:16.018 INFO:teuthology.orchestra.run.vm09.stderr:bash: line 1: ntpq: command not found
2026-03-09T20:14:16.021 INFO:teuthology.orchestra.run.vm09.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-09T20:14:16.022 INFO:teuthology.orchestra.run.vm09.stdout:===============================================================================
2026-03-09T20:14:16.022 INFO:teuthology.orchestra.run.vm09.stdout:^? de.relay.mahi.be 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-09T20:14:16.022 INFO:teuthology.orchestra.run.vm09.stdout:^? 212.132.108.186 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-09T20:14:16.022 INFO:teuthology.orchestra.run.vm09.stdout:^? 185.252.140.125 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-09T20:14:16.022 INFO:teuthology.orchestra.run.vm09.stdout:^? sonne.floppy.org 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-09T20:14:16.022 INFO:teuthology.run_tasks:Running task pexec...
2026-03-09T20:14:16.025 INFO:teuthology.task.pexec:Executing custom commands...
2026-03-09T20:14:16.025 DEBUG:teuthology.orchestra.run.vm05:> TESTDIR=/home/ubuntu/cephtest bash -s
2026-03-09T20:14:16.025 DEBUG:teuthology.orchestra.run.vm09:> TESTDIR=/home/ubuntu/cephtest bash -s
2026-03-09T20:14:16.028 DEBUG:teuthology.task.pexec:ubuntu@vm09.local< sudo dnf remove nvme-cli -y
2026-03-09T20:14:16.028 DEBUG:teuthology.task.pexec:ubuntu@vm09.local< sudo dnf install runc nvmetcli nvme-cli -y
2026-03-09T20:14:16.028 DEBUG:teuthology.task.pexec:ubuntu@vm09.local< sudo sed -i 's/^#runtime = "crun"/runtime = "runc"/g' /usr/share/containers/containers.conf
2026-03-09T20:14:16.028 DEBUG:teuthology.task.pexec:ubuntu@vm09.local< sudo sed -i 's/runtime = "crun"/#runtime = "crun"/g' /usr/share/containers/containers.conf
2026-03-09T20:14:16.028 INFO:teuthology.task.pexec:Running commands on host ubuntu@vm09.local
2026-03-09T20:14:16.028 INFO:teuthology.task.pexec:sudo dnf remove nvme-cli -y
2026-03-09T20:14:16.028 INFO:teuthology.task.pexec:sudo dnf install runc nvmetcli nvme-cli -y
2026-03-09T20:14:16.028 INFO:teuthology.task.pexec:sudo sed -i 's/^#runtime = "crun"/runtime = "runc"/g' /usr/share/containers/containers.conf
2026-03-09T20:14:16.028 INFO:teuthology.task.pexec:sudo sed -i 's/runtime = "crun"/#runtime = "crun"/g' /usr/share/containers/containers.conf
2026-03-09T20:14:16.042 DEBUG:teuthology.task.pexec:ubuntu@vm05.local< sudo dnf remove nvme-cli -y
2026-03-09T20:14:16.042 DEBUG:teuthology.task.pexec:ubuntu@vm05.local< sudo dnf install runc nvmetcli nvme-cli -y
2026-03-09T20:14:16.042 DEBUG:teuthology.task.pexec:ubuntu@vm05.local< sudo sed -i 's/^#runtime = "crun"/runtime = "runc"/g' /usr/share/containers/containers.conf
2026-03-09T20:14:16.042 DEBUG:teuthology.task.pexec:ubuntu@vm05.local< sudo sed -i 's/runtime = "crun"/#runtime = "crun"/g' /usr/share/containers/containers.conf
2026-03-09T20:14:16.042 INFO:teuthology.task.pexec:Running commands on host ubuntu@vm05.local
2026-03-09T20:14:16.042 INFO:teuthology.task.pexec:sudo dnf remove nvme-cli -y
2026-03-09T20:14:16.042 INFO:teuthology.task.pexec:sudo dnf install runc nvmetcli nvme-cli -y
2026-03-09T20:14:16.042 INFO:teuthology.task.pexec:sudo sed -i 's/^#runtime = "crun"/runtime = "runc"/g' /usr/share/containers/containers.conf
2026-03-09T20:14:16.042 INFO:teuthology.task.pexec:sudo sed -i 's/runtime = "crun"/#runtime = "crun"/g' /usr/share/containers/containers.conf
2026-03-09T20:14:16.281 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: nvme-cli
2026-03-09T20:14:16.282 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-09T20:14:16.285 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-09T20:14:16.286 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-09T20:14:16.286 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-09T20:14:16.330 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: nvme-cli
2026-03-09T20:14:16.330 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal.
2026-03-09T20:14:16.333 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-09T20:14:16.334 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do.
2026-03-09T20:14:16.334 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-09T20:14:16.696 INFO:teuthology.orchestra.run.vm09.stdout:Last metadata expiration check: 0:01:08 ago on Mon 09 Mar 2026 08:13:08 PM UTC.
2026-03-09T20:14:16.811 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-09T20:14:16.812 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-09T20:14:16.812 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size
2026-03-09T20:14:16.812 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-09T20:14:16.812 INFO:teuthology.orchestra.run.vm09.stdout:Installing:
2026-03-09T20:14:16.812 INFO:teuthology.orchestra.run.vm09.stdout: nvme-cli x86_64 2.16-1.el9 baseos 1.2 M
2026-03-09T20:14:16.812 INFO:teuthology.orchestra.run.vm09.stdout: nvmetcli noarch 0.8-3.el9 baseos 44 k
2026-03-09T20:14:16.812 INFO:teuthology.orchestra.run.vm09.stdout: runc x86_64 4:1.4.0-2.el9 appstream 4.0 M
2026-03-09T20:14:16.812 INFO:teuthology.orchestra.run.vm09.stdout:Installing dependencies:
2026-03-09T20:14:16.812 INFO:teuthology.orchestra.run.vm09.stdout: python3-configshell noarch 1:1.1.30-1.el9 baseos 72 k
2026-03-09T20:14:16.812 INFO:teuthology.orchestra.run.vm09.stdout: python3-kmod x86_64 0.9-32.el9 baseos 84 k
2026-03-09T20:14:16.812 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyparsing noarch 2.4.7-9.el9 baseos 150 k
2026-03-09T20:14:16.812 INFO:teuthology.orchestra.run.vm09.stdout: python3-urwid x86_64 2.1.2-4.el9 baseos 837 k
2026-03-09T20:14:16.812 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T20:14:16.812 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary
2026-03-09T20:14:16.812 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-09T20:14:16.812 INFO:teuthology.orchestra.run.vm09.stdout:Install 7 Packages
2026-03-09T20:14:16.812 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T20:14:16.812 INFO:teuthology.orchestra.run.vm09.stdout:Total download size: 6.3 M
2026-03-09T20:14:16.812 INFO:teuthology.orchestra.run.vm09.stdout:Installed size: 24 M
2026-03-09T20:14:16.813 INFO:teuthology.orchestra.run.vm09.stdout:Downloading Packages:
2026-03-09T20:14:16.892 INFO:teuthology.orchestra.run.vm05.stdout:Last metadata expiration check: 0:01:09 ago on Mon 09 Mar 2026 08:13:07 PM UTC.
2026-03-09T20:14:17.023 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-09T20:14:17.024 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-09T20:14:17.024 INFO:teuthology.orchestra.run.vm05.stdout: Package Arch Version Repository Size
2026-03-09T20:14:17.024 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-09T20:14:17.024 INFO:teuthology.orchestra.run.vm05.stdout:Installing:
2026-03-09T20:14:17.024 INFO:teuthology.orchestra.run.vm05.stdout: nvme-cli x86_64 2.16-1.el9 baseos 1.2 M
2026-03-09T20:14:17.024 INFO:teuthology.orchestra.run.vm05.stdout: nvmetcli noarch 0.8-3.el9 baseos 44 k
2026-03-09T20:14:17.024 INFO:teuthology.orchestra.run.vm05.stdout: runc x86_64 4:1.4.0-2.el9 appstream 4.0 M
2026-03-09T20:14:17.024 INFO:teuthology.orchestra.run.vm05.stdout:Installing dependencies:
2026-03-09T20:14:17.024 INFO:teuthology.orchestra.run.vm05.stdout: python3-configshell noarch 1:1.1.30-1.el9 baseos 72 k
2026-03-09T20:14:17.024 INFO:teuthology.orchestra.run.vm05.stdout: python3-kmod x86_64 0.9-32.el9 baseos 84 k
2026-03-09T20:14:17.024 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyparsing noarch 2.4.7-9.el9 baseos 150 k
2026-03-09T20:14:17.024 INFO:teuthology.orchestra.run.vm05.stdout: python3-urwid x86_64 2.1.2-4.el9 baseos 837 k
2026-03-09T20:14:17.024 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T20:14:17.024 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary
2026-03-09T20:14:17.024 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-09T20:14:17.024 INFO:teuthology.orchestra.run.vm05.stdout:Install 7 Packages
2026-03-09T20:14:17.024 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T20:14:17.024 INFO:teuthology.orchestra.run.vm05.stdout:Total download size: 6.3 M
2026-03-09T20:14:17.024 INFO:teuthology.orchestra.run.vm05.stdout:Installed size: 24 M
2026-03-09T20:14:17.024 INFO:teuthology.orchestra.run.vm05.stdout:Downloading Packages:
2026-03-09T20:14:17.191 INFO:teuthology.orchestra.run.vm09.stdout:(1/7): python3-configshell-1.1.30-1.el9.noarch. 529 kB/s | 72 kB 00:00
2026-03-09T20:14:17.270 INFO:teuthology.orchestra.run.vm09.stdout:(2/7): python3-kmod-0.9-32.el9.x86_64.rpm 1.0 MB/s | 84 kB 00:00
2026-03-09T20:14:17.303 INFO:teuthology.orchestra.run.vm09.stdout:(3/7): nvmetcli-0.8-3.el9.noarch.rpm 176 kB/s | 44 kB 00:00
2026-03-09T20:14:17.323 INFO:teuthology.orchestra.run.vm09.stdout:(4/7): python3-pyparsing-2.4.7-9.el9.noarch.rpm 2.8 MB/s | 150 kB 00:00
2026-03-09T20:14:17.403 INFO:teuthology.orchestra.run.vm09.stdout:(5/7): nvme-cli-2.16-1.el9.x86_64.rpm 3.3 MB/s | 1.2 MB 00:00
2026-03-09T20:14:17.495 INFO:teuthology.orchestra.run.vm09.stdout:(6/7): python3-urwid-2.1.2-4.el9.x86_64.rpm 4.3 MB/s | 837 kB 00:00
2026-03-09T20:14:17.672 INFO:teuthology.orchestra.run.vm09.stdout:(7/7): runc-1.4.0-2.el9.x86_64.rpm 11 MB/s | 4.0 MB 00:00
2026-03-09T20:14:17.672 INFO:teuthology.orchestra.run.vm09.stdout:--------------------------------------------------------------------------------
2026-03-09T20:14:17.672 INFO:teuthology.orchestra.run.vm09.stdout:Total 7.3 MB/s | 6.3 MB 00:00
2026-03-09T20:14:17.707 INFO:teuthology.orchestra.run.vm05.stdout:(1/7): nvmetcli-0.8-3.el9.noarch.rpm 437 kB/s | 44 kB 00:00
2026-03-09T20:14:17.716 INFO:teuthology.orchestra.run.vm05.stdout:(2/7): python3-configshell-1.1.30-1.el9.noarch. 656 kB/s | 72 kB 00:00
2026-03-09T20:14:17.759 INFO:teuthology.orchestra.run.vm05.stdout:(3/7): python3-kmod-0.9-32.el9.x86_64.rpm 1.6 MB/s | 84 kB 00:00
2026-03-09T20:14:17.763 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check
2026-03-09T20:14:17.774 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded.
2026-03-09T20:14:17.774 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test
2026-03-09T20:14:17.781 INFO:teuthology.orchestra.run.vm05.stdout:(4/7): python3-pyparsing-2.4.7-9.el9.noarch.rpm 2.3 MB/s | 150 kB 00:00
2026-03-09T20:14:17.846 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded.
2026-03-09T20:14:17.846 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction
2026-03-09T20:14:17.875 INFO:teuthology.orchestra.run.vm05.stdout:(5/7): nvme-cli-2.16-1.el9.x86_64.rpm 4.3 MB/s | 1.2 MB 00:00
2026-03-09T20:14:17.895 INFO:teuthology.orchestra.run.vm05.stdout:(6/7): python3-urwid-2.1.2-4.el9.x86_64.rpm 6.0 MB/s | 837 kB 00:00
2026-03-09T20:14:18.044 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1
2026-03-09T20:14:18.056 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-urwid-2.1.2-4.el9.x86_64 1/7
2026-03-09T20:14:18.067 INFO:teuthology.orchestra.run.vm05.stdout:(7/7): runc-1.4.0-2.el9.x86_64.rpm 14 MB/s | 4.0 MB 00:00
2026-03-09T20:14:18.068 INFO:teuthology.orchestra.run.vm05.stdout:--------------------------------------------------------------------------------
2026-03-09T20:14:18.068 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-pyparsing-2.4.7-9.el9.noarch 2/7
2026-03-09T20:14:18.068 INFO:teuthology.orchestra.run.vm05.stdout:Total 6.0 MB/s | 6.3 MB 00:01
2026-03-09T20:14:18.076 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-configshell-1:1.1.30-1.el9.noarch 3/7
2026-03-09T20:14:18.085 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-kmod-0.9-32.el9.x86_64 4/7
2026-03-09T20:14:18.087 INFO:teuthology.orchestra.run.vm09.stdout: Installing : nvmetcli-0.8-3.el9.noarch 5/7
2026-03-09T20:14:18.145 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: nvmetcli-0.8-3.el9.noarch 5/7
2026-03-09T20:14:18.196 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check
2026-03-09T20:14:18.212 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded.
2026-03-09T20:14:18.212 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test
2026-03-09T20:14:18.302 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded.
2026-03-09T20:14:18.302 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction
2026-03-09T20:14:18.369 INFO:teuthology.orchestra.run.vm09.stdout: Installing : runc-4:1.4.0-2.el9.x86_64 6/7
2026-03-09T20:14:18.375 INFO:teuthology.orchestra.run.vm09.stdout: Installing : nvme-cli-2.16-1.el9.x86_64 7/7
2026-03-09T20:14:18.535 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1
2026-03-09T20:14:18.548 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-urwid-2.1.2-4.el9.x86_64 1/7
2026-03-09T20:14:18.562 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-pyparsing-2.4.7-9.el9.noarch 2/7
2026-03-09T20:14:18.570 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-configshell-1:1.1.30-1.el9.noarch 3/7
2026-03-09T20:14:18.582 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-kmod-0.9-32.el9.x86_64 4/7
2026-03-09T20:14:18.584 INFO:teuthology.orchestra.run.vm05.stdout: Installing : nvmetcli-0.8-3.el9.noarch 5/7
2026-03-09T20:14:18.640 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: nvmetcli-0.8-3.el9.noarch 5/7
2026-03-09T20:14:18.770 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: nvme-cli-2.16-1.el9.x86_64 7/7
2026-03-09T20:14:18.770 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /usr/lib/systemd/system/nvmefc-boot-connections.service.
2026-03-09T20:14:18.770 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T20:14:18.808 INFO:teuthology.orchestra.run.vm05.stdout: Installing : runc-4:1.4.0-2.el9.x86_64 6/7
2026-03-09T20:14:18.817 INFO:teuthology.orchestra.run.vm05.stdout: Installing : nvme-cli-2.16-1.el9.x86_64 7/7
2026-03-09T20:14:19.242 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: nvme-cli-2.16-1.el9.x86_64 7/7
2026-03-09T20:14:19.242 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /usr/lib/systemd/system/nvmefc-boot-connections.service.
2026-03-09T20:14:19.243 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T20:14:19.413 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : nvme-cli-2.16-1.el9.x86_64 1/7
2026-03-09T20:14:19.414 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : nvmetcli-0.8-3.el9.noarch 2/7
2026-03-09T20:14:19.414 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-configshell-1:1.1.30-1.el9.noarch 3/7
2026-03-09T20:14:19.414 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-kmod-0.9-32.el9.x86_64 4/7
2026-03-09T20:14:19.414 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pyparsing-2.4.7-9.el9.noarch 5/7
2026-03-09T20:14:19.414 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-urwid-2.1.2-4.el9.x86_64 6/7
2026-03-09T20:14:19.514 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : runc-4:1.4.0-2.el9.x86_64 7/7
2026-03-09T20:14:19.514 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T20:14:19.514 INFO:teuthology.orchestra.run.vm09.stdout:Installed:
2026-03-09T20:14:19.515 INFO:teuthology.orchestra.run.vm09.stdout: nvme-cli-2.16-1.el9.x86_64 nvmetcli-0.8-3.el9.noarch
2026-03-09T20:14:19.515 INFO:teuthology.orchestra.run.vm09.stdout: python3-configshell-1:1.1.30-1.el9.noarch python3-kmod-0.9-32.el9.x86_64
2026-03-09T20:14:19.515 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyparsing-2.4.7-9.el9.noarch python3-urwid-2.1.2-4.el9.x86_64
2026-03-09T20:14:19.515 INFO:teuthology.orchestra.run.vm09.stdout: runc-4:1.4.0-2.el9.x86_64
2026-03-09T20:14:19.515 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T20:14:19.515 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-09T20:14:19.659 DEBUG:teuthology.parallel:result is None
2026-03-09T20:14:19.896 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : nvme-cli-2.16-1.el9.x86_64 1/7
2026-03-09T20:14:19.896 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : nvmetcli-0.8-3.el9.noarch 2/7
2026-03-09T20:14:19.896 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-configshell-1:1.1.30-1.el9.noarch 3/7
2026-03-09T20:14:19.896 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-kmod-0.9-32.el9.x86_64 4/7
2026-03-09T20:14:19.896 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pyparsing-2.4.7-9.el9.noarch 5/7
2026-03-09T20:14:19.896 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-urwid-2.1.2-4.el9.x86_64 6/7
2026-03-09T20:14:19.993 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : runc-4:1.4.0-2.el9.x86_64 7/7
2026-03-09T20:14:19.993 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T20:14:19.993 INFO:teuthology.orchestra.run.vm05.stdout:Installed:
2026-03-09T20:14:19.993 INFO:teuthology.orchestra.run.vm05.stdout: nvme-cli-2.16-1.el9.x86_64 nvmetcli-0.8-3.el9.noarch
2026-03-09T20:14:19.993 INFO:teuthology.orchestra.run.vm05.stdout: python3-configshell-1:1.1.30-1.el9.noarch python3-kmod-0.9-32.el9.x86_64
2026-03-09T20:14:19.993 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyparsing-2.4.7-9.el9.noarch python3-urwid-2.1.2-4.el9.x86_64
2026-03-09T20:14:19.993 INFO:teuthology.orchestra.run.vm05.stdout: runc-4:1.4.0-2.el9.x86_64
2026-03-09T20:14:19.993 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T20:14:19.993 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-09T20:14:20.116 DEBUG:teuthology.parallel:result is None
2026-03-09T20:14:20.116 INFO:teuthology.run_tasks:Running task install...
2026-03-09T20:14:20.119 DEBUG:teuthology.task.install:project ceph
2026-03-09T20:14:20.119 DEBUG:teuthology.task.install:INSTALL overrides: {'ceph': {'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}, 'extra_packages': ['cephadm'], 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}}
2026-03-09T20:14:20.119 DEBUG:teuthology.task.install:config {'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}}
2026-03-09T20:14:20.119 INFO:teuthology.task.install:Using flavor: default
2026-03-09T20:14:20.122 DEBUG:teuthology.task.install:Package list is: {'deb': ['ceph', 'cephadm', 'ceph-mds', 'ceph-mgr', 'ceph-common', 'ceph-fuse', 'ceph-test', 'ceph-volume', 'radosgw', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'libcephfs2', 'libcephfs-dev', 'librados2', 'librbd1', 'rbd-fuse'], 'rpm': ['ceph-radosgw', 'ceph-test', 'ceph', 'ceph-base', 'cephadm', 'ceph-immutable-object-cache', 'ceph-mgr', 'ceph-mgr-dashboard', 'ceph-mgr-diskprediction-local', 'ceph-mgr-rook', 'ceph-mgr-cephadm', 'ceph-fuse', 'ceph-volume', 'librados-devel', 'libcephfs2', 'libcephfs-devel', 'librados2', 'librbd1', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'rbd-fuse', 'rbd-mirror', 'rbd-nbd']}
2026-03-09T20:14:20.122 INFO:teuthology.task.install:extra packages: []
2026-03-09T20:14:20.123 DEBUG:teuthology.task.install.rpm:_update_package_list_and_install: config is {'branch': None, 'cleanup': None, 'debuginfo': None, 'downgrade_packages': [], 'exclude_packages': [], 'extra_packages': [], 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}, 'extras': None, 'enable_coprs': [], 'flavor': 'default', 'install_ceph_packages': True, 'packages': {}, 'project': 'ceph', 'repos_only': False, 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'tag': None, 'wait_for_package': False}
2026-03-09T20:14:20.123 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T20:14:20.123 DEBUG:teuthology.task.install.rpm:_update_package_list_and_install: config is {'branch': None, 'cleanup': None, 'debuginfo': None, 'downgrade_packages': [], 'exclude_packages': [], 'extra_packages': [], 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}, 'extras': None, 'enable_coprs': [], 'flavor': 'default', 'install_ceph_packages': True, 'packages': {}, 'project': 'ceph', 'repos_only': False, 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'tag': None, 'wait_for_package': False}
2026-03-09T20:14:20.124 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T20:14:20.703 INFO:teuthology.task.install.rpm:Pulling from https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/
2026-03-09T20:14:20.703 INFO:teuthology.task.install.rpm:Package version is 19.2.3-678.ge911bdeb
2026-03-09T20:14:20.764 INFO:teuthology.task.install.rpm:Pulling from https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/
2026-03-09T20:14:20.764 INFO:teuthology.task.install.rpm:Package version is 19.2.3-678.ge911bdeb
2026-03-09T20:14:21.227 INFO:teuthology.packaging:Writing yum repo:
[ceph]
name=ceph packages for $basearch
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/$basearch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-noarch]
name=ceph noarch packages
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/noarch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-source]
name=ceph source packages
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
2026-03-09T20:14:21.227 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-09T20:14:21.227 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/yum.repos.d/ceph.repo
2026-03-09T20:14:21.258 INFO:teuthology.task.install.rpm:Installing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd, bzip2, perl-Test-Harness, python3-xmltodict, python3-jmespath on remote rpm x86_64
2026-03-09T20:14:21.258 DEBUG:teuthology.orchestra.run.vm05:> if test -f /etc/yum.repos.d/ceph.repo ; then sudo sed -i -e ':a;N;$!ba;s/enabled=1\ngpg/enabled=1\npriority=1\ngpg/g' -e 's;ref/[a-zA-Z0-9_-]*/;sha1/e911bdebe5c8faa3800735d1568fcdca65db60df/;g' /etc/yum.repos.d/ceph.repo ; fi
2026-03-09T20:14:21.303 INFO:teuthology.packaging:Writing yum repo:
[ceph]
name=ceph packages for $basearch
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/$basearch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-noarch]
name=ceph noarch packages
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/noarch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-source]
name=ceph source packages
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
2026-03-09T20:14:21.303 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-09T20:14:21.303 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/yum.repos.d/ceph.repo
2026-03-09T20:14:21.333 INFO:teuthology.task.install.rpm:Installing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd, bzip2, perl-Test-Harness, python3-xmltodict, python3-jmespath on remote rpm x86_64
2026-03-09T20:14:21.333 DEBUG:teuthology.orchestra.run.vm09:> if test -f /etc/yum.repos.d/ceph.repo ; then sudo sed -i -e ':a;N;$!ba;s/enabled=1\ngpg/enabled=1\npriority=1\ngpg/g' -e 's;ref/[a-zA-Z0-9_-]*/;sha1/e911bdebe5c8faa3800735d1568fcdca65db60df/;g' /etc/yum.repos.d/ceph.repo ; fi
2026-03-09T20:14:21.344 DEBUG:teuthology.orchestra.run.vm05:> sudo touch -a /etc/yum/pluginconf.d/priorities.conf ; test -e /etc/yum/pluginconf.d/priorities.conf.orig || sudo cp -af /etc/yum/pluginconf.d/priorities.conf /etc/yum/pluginconf.d/priorities.conf.orig
2026-03-09T20:14:21.399 DEBUG:teuthology.orchestra.run.vm05:> grep check_obsoletes /etc/yum/pluginconf.d/priorities.conf && sudo sed -i 's/check_obsoletes.*0/check_obsoletes = 1/g' /etc/yum/pluginconf.d/priorities.conf || echo 'check_obsoletes = 1' | sudo tee -a /etc/yum/pluginconf.d/priorities.conf
2026-03-09T20:14:21.414 DEBUG:teuthology.orchestra.run.vm09:> sudo touch -a /etc/yum/pluginconf.d/priorities.conf ; test -e /etc/yum/pluginconf.d/priorities.conf.orig || sudo cp -af /etc/yum/pluginconf.d/priorities.conf /etc/yum/pluginconf.d/priorities.conf.orig
2026-03-09T20:14:21.432 INFO:teuthology.orchestra.run.vm05.stdout:check_obsoletes = 1
2026-03-09T20:14:21.435 DEBUG:teuthology.orchestra.run.vm05:> sudo yum clean all
2026-03-09T20:14:21.500 DEBUG:teuthology.orchestra.run.vm09:> grep check_obsoletes /etc/yum/pluginconf.d/priorities.conf && sudo sed -i 's/check_obsoletes.*0/check_obsoletes = 1/g' /etc/yum/pluginconf.d/priorities.conf || echo 'check_obsoletes = 1' | sudo tee -a /etc/yum/pluginconf.d/priorities.conf
2026-03-09T20:14:21.534 INFO:teuthology.orchestra.run.vm09.stdout:check_obsoletes = 1
2026-03-09T20:14:21.536 DEBUG:teuthology.orchestra.run.vm09:> sudo yum clean all
2026-03-09T20:14:21.645 INFO:teuthology.orchestra.run.vm05.stdout:41 files removed
2026-03-09T20:14:21.681 DEBUG:teuthology.orchestra.run.vm05:> sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd bzip2 perl-Test-Harness python3-xmltodict python3-jmespath
2026-03-09T20:14:21.738 INFO:teuthology.orchestra.run.vm09.stdout:41 files removed
2026-03-09T20:14:21.768 DEBUG:teuthology.orchestra.run.vm09:> sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd bzip2 perl-Test-Harness python3-xmltodict python3-jmespath
2026-03-09T20:14:23.086 INFO:teuthology.orchestra.run.vm05.stdout:ceph packages for x86_64 71 kB/s | 84 kB 00:01
2026-03-09T20:14:23.144 INFO:teuthology.orchestra.run.vm09.stdout:ceph packages for x86_64 71 kB/s | 84 kB 00:01
2026-03-09T20:14:24.050 INFO:teuthology.orchestra.run.vm05.stdout:ceph noarch packages 12 kB/s | 12 kB 00:00
2026-03-09T20:14:24.127 INFO:teuthology.orchestra.run.vm09.stdout:ceph noarch packages 12 kB/s | 12 kB 00:00
2026-03-09T20:14:25.027 INFO:teuthology.orchestra.run.vm05.stdout:ceph source packages 2.0 kB/s | 1.9 kB 00:00
2026-03-09T20:14:25.081 INFO:teuthology.orchestra.run.vm09.stdout:ceph source packages 2.0 kB/s | 1.9 kB 00:00
2026-03-09T20:14:26.767 INFO:teuthology.orchestra.run.vm05.stdout:CentOS Stream 9 - BaseOS 5.2 MB/s | 8.9 MB 00:01
2026-03-09T20:14:31.896 INFO:teuthology.orchestra.run.vm09.stdout:CentOS Stream 9 - BaseOS 1.3 MB/s | 8.9 MB 00:06
2026-03-09T20:14:38.466 INFO:teuthology.orchestra.run.vm09.stdout:CentOS Stream 9 - AppStream 4.6 MB/s | 27 MB 00:05
2026-03-09T20:14:40.965 INFO:teuthology.orchestra.run.vm05.stdout:CentOS Stream 9 - AppStream 2.0 MB/s | 27 MB 00:13
2026-03-09T20:14:42.771 INFO:teuthology.orchestra.run.vm09.stdout:CentOS Stream 9 - CRB 5.4 MB/s | 8.0 MB 00:01
2026-03-09T20:14:43.888 INFO:teuthology.orchestra.run.vm09.stdout:CentOS Stream 9 - Extras packages 82 kB/s | 20 kB 00:00
2026-03-09T20:14:44.559 INFO:teuthology.orchestra.run.vm09.stdout:Extra Packages for Enterprise Linux 35 MB/s | 20 MB 00:00
2026-03-09T20:14:45.737 INFO:teuthology.orchestra.run.vm05.stdout:CentOS Stream 9 - CRB 4.6 MB/s | 8.0 MB 00:01
2026-03-09T20:14:47.242 INFO:teuthology.orchestra.run.vm05.stdout:CentOS Stream 9 - Extras packages 34 kB/s | 20 kB 00:00
2026-03-09T20:14:48.133 INFO:teuthology.orchestra.run.vm05.stdout:Extra Packages for Enterprise Linux 25 MB/s | 20 MB 00:00
2026-03-09T20:14:49.360 INFO:teuthology.orchestra.run.vm09.stdout:lab-extras 63 kB/s | 50 kB 00:00
2026-03-09T20:14:50.788 INFO:teuthology.orchestra.run.vm09.stdout:Package librados2-2:16.2.4-5.el9.x86_64 is already installed.
2026-03-09T20:14:50.788 INFO:teuthology.orchestra.run.vm09.stdout:Package librbd1-2:16.2.4-5.el9.x86_64 is already installed.
2026-03-09T20:14:50.792 INFO:teuthology.orchestra.run.vm09.stdout:Package bzip2-1.0.8-11.el9.x86_64 is already installed.
2026-03-09T20:14:50.793 INFO:teuthology.orchestra.run.vm09.stdout:Package perl-Test-Harness-1:3.42-461.el9.noarch is already installed.
2026-03-09T20:14:50.820 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-09T20:14:50.825 INFO:teuthology.orchestra.run.vm09.stdout:======================================================================================
2026-03-09T20:14:50.826 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size
2026-03-09T20:14:50.826 INFO:teuthology.orchestra.run.vm09.stdout:======================================================================================
2026-03-09T20:14:50.826 INFO:teuthology.orchestra.run.vm09.stdout:Installing:
2026-03-09T20:14:50.826 INFO:teuthology.orchestra.run.vm09.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 6.5 k
2026-03-09T20:14:50.826 INFO:teuthology.orchestra.run.vm09.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.5 M
2026-03-09T20:14:50.826 INFO:teuthology.orchestra.run.vm09.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.2 M
2026-03-09T20:14:50.826 INFO:teuthology.orchestra.run.vm09.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 145 k
2026-03-09T20:14:50.826 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.1 M
2026-03-09T20:14:50.826 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 150 k
2026-03-09T20:14:50.826 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 3.8 M
2026-03-09T20:14:50.826 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 7.4 M
2026-03-09T20:14:50.826 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 49 k
2026-03-09T20:14:50.826 INFO:teuthology.orchestra.run.vm09.stdout: ceph-radosgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 11 M
2026-03-09T20:14:50.826 INFO:teuthology.orchestra.run.vm09.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 50 M
2026-03-09T20:14:50.826 INFO:teuthology.orchestra.run.vm09.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 299 k
2026-03-09T20:14:50.826 INFO:teuthology.orchestra.run.vm09.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 769 k 2026-03-09T20:14:50.826 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 34 k 2026-03-09T20:14:50.826 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.0 M 2026-03-09T20:14:50.826 INFO:teuthology.orchestra.run.vm09.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 127 k 2026-03-09T20:14:50.826 INFO:teuthology.orchestra.run.vm09.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 165 k 2026-03-09T20:14:50.826 INFO:teuthology.orchestra.run.vm09.stdout: python3-jmespath noarch 1.0.1-1.el9 appstream 48 k 2026-03-09T20:14:50.826 INFO:teuthology.orchestra.run.vm09.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 323 k 2026-03-09T20:14:50.826 INFO:teuthology.orchestra.run.vm09.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 303 k 2026-03-09T20:14:50.826 INFO:teuthology.orchestra.run.vm09.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 100 k 2026-03-09T20:14:50.826 INFO:teuthology.orchestra.run.vm09.stdout: python3-xmltodict noarch 0.12.0-15.el9 epel 22 k 2026-03-09T20:14:50.826 INFO:teuthology.orchestra.run.vm09.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 85 k 2026-03-09T20:14:50.826 INFO:teuthology.orchestra.run.vm09.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.1 M 2026-03-09T20:14:50.826 INFO:teuthology.orchestra.run.vm09.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 171 k 2026-03-09T20:14:50.826 INFO:teuthology.orchestra.run.vm09.stdout:Upgrading: 2026-03-09T20:14:50.826 INFO:teuthology.orchestra.run.vm09.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.4 M 2026-03-09T20:14:50.826 INFO:teuthology.orchestra.run.vm09.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.2 M 2026-03-09T20:14:50.826 INFO:teuthology.orchestra.run.vm09.stdout:Installing dependencies: 2026-03-09T20:14:50.826 INFO:teuthology.orchestra.run.vm09.stdout: abseil-cpp x86_64 20211102.0-4.el9 epel 551 k 2026-03-09T20:14:50.826 INFO:teuthology.orchestra.run.vm09.stdout: boost-program-options x86_64 1.75.0-13.el9 appstream 104 k 2026-03-09T20:14:50.826 INFO:teuthology.orchestra.run.vm09.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 22 M 2026-03-09T20:14:50.826 INFO:teuthology.orchestra.run.vm09.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 31 k 2026-03-09T20:14:50.826 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 2.4 M 2026-03-09T20:14:50.826 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 253 k 2026-03-09T20:14:50.827 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 4.7 M 2026-03-09T20:14:50.827 INFO:teuthology.orchestra.run.vm09.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 17 M 2026-03-09T20:14:50.827 INFO:teuthology.orchestra.run.vm09.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 17 k 2026-03-09T20:14:50.827 INFO:teuthology.orchestra.run.vm09.stdout: ceph-selinux x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 25 k 2026-03-09T20:14:50.827 INFO:teuthology.orchestra.run.vm09.stdout: cryptsetup x86_64 2.8.1-3.el9 baseos 351 k 2026-03-09T20:14:50.827 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas x86_64 3.0.4-9.el9 appstream 30 k 2026-03-09T20:14:50.827 
INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 appstream 3.0 M 2026-03-09T20:14:50.827 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 appstream 15 k 2026-03-09T20:14:50.827 INFO:teuthology.orchestra.run.vm09.stdout: gperftools-libs x86_64 2.9.1-3.el9 epel 308 k 2026-03-09T20:14:50.827 INFO:teuthology.orchestra.run.vm09.stdout: grpc-data noarch 1.46.7-10.el9 epel 19 k 2026-03-09T20:14:50.827 INFO:teuthology.orchestra.run.vm09.stdout: ledmon-libs x86_64 1.1.0-3.el9 baseos 40 k 2026-03-09T20:14:50.827 INFO:teuthology.orchestra.run.vm09.stdout: libarrow x86_64 9.0.0-15.el9 epel 4.4 M 2026-03-09T20:14:50.827 INFO:teuthology.orchestra.run.vm09.stdout: libarrow-doc noarch 9.0.0-15.el9 epel 25 k 2026-03-09T20:14:50.827 INFO:teuthology.orchestra.run.vm09.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 163 k 2026-03-09T20:14:50.827 INFO:teuthology.orchestra.run.vm09.stdout: libconfig x86_64 1.7.2-9.el9 baseos 72 k 2026-03-09T20:14:50.827 INFO:teuthology.orchestra.run.vm09.stdout: libgfortran x86_64 11.5.0-14.el9 baseos 794 k 2026-03-09T20:14:50.827 INFO:teuthology.orchestra.run.vm09.stdout: libnbd x86_64 1.20.3-4.el9 appstream 164 k 2026-03-09T20:14:50.827 INFO:teuthology.orchestra.run.vm09.stdout: liboath x86_64 2.6.12-1.el9 epel 49 k 2026-03-09T20:14:50.827 INFO:teuthology.orchestra.run.vm09.stdout: libpmemobj x86_64 1.12.1-1.el9 appstream 160 k 2026-03-09T20:14:50.827 INFO:teuthology.orchestra.run.vm09.stdout: libquadmath x86_64 11.5.0-14.el9 baseos 184 k 2026-03-09T20:14:50.827 INFO:teuthology.orchestra.run.vm09.stdout: librabbitmq x86_64 0.11.0-7.el9 appstream 45 k 2026-03-09T20:14:50.827 INFO:teuthology.orchestra.run.vm09.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 503 k 2026-03-09T20:14:50.827 INFO:teuthology.orchestra.run.vm09.stdout: librdkafka x86_64 1.6.1-102.el9 appstream 662 k 2026-03-09T20:14:50.827 INFO:teuthology.orchestra.run.vm09.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.4 M 2026-03-09T20:14:50.827 INFO:teuthology.orchestra.run.vm09.stdout: libstoragemgmt x86_64 1.10.1-1.el9 appstream 246 k 2026-03-09T20:14:50.827 INFO:teuthology.orchestra.run.vm09.stdout: libunwind x86_64 1.6.2-1.el9 epel 67 k 2026-03-09T20:14:50.827 INFO:teuthology.orchestra.run.vm09.stdout: libxslt x86_64 1.1.34-12.el9 appstream 233 k 2026-03-09T20:14:50.827 INFO:teuthology.orchestra.run.vm09.stdout: lttng-ust x86_64 2.12.0-6.el9 appstream 292 k 2026-03-09T20:14:50.827 INFO:teuthology.orchestra.run.vm09.stdout: lua x86_64 5.4.4-4.el9 appstream 188 k 2026-03-09T20:14:50.827 INFO:teuthology.orchestra.run.vm09.stdout: lua-devel x86_64 5.4.4-4.el9 crb 22 k 2026-03-09T20:14:50.827 INFO:teuthology.orchestra.run.vm09.stdout: luarocks noarch 3.9.2-5.el9 epel 151 k 2026-03-09T20:14:50.827 INFO:teuthology.orchestra.run.vm09.stdout: mailcap noarch 2.1.49-5.el9 baseos 33 k 2026-03-09T20:14:50.827 INFO:teuthology.orchestra.run.vm09.stdout: openblas x86_64 0.3.29-1.el9 appstream 42 k 2026-03-09T20:14:50.827 INFO:teuthology.orchestra.run.vm09.stdout: openblas-openmp x86_64 0.3.29-1.el9 appstream 5.3 M 2026-03-09T20:14:50.827 INFO:teuthology.orchestra.run.vm09.stdout: parquet-libs x86_64 9.0.0-15.el9 epel 838 k 2026-03-09T20:14:50.827 INFO:teuthology.orchestra.run.vm09.stdout: pciutils x86_64 3.7.0-7.el9 baseos 93 k 2026-03-09T20:14:50.827 INFO:teuthology.orchestra.run.vm09.stdout: protobuf x86_64 3.14.0-17.el9 appstream 1.0 M 2026-03-09T20:14:50.827 INFO:teuthology.orchestra.run.vm09.stdout: 
protobuf-compiler x86_64 3.14.0-17.el9 crb 862 k 2026-03-09T20:14:50.827 INFO:teuthology.orchestra.run.vm09.stdout: python3-asyncssh noarch 2.13.2-5.el9 epel 548 k 2026-03-09T20:14:50.827 INFO:teuthology.orchestra.run.vm09.stdout: python3-autocommand noarch 2.2.2-8.el9 epel 29 k 2026-03-09T20:14:50.827 INFO:teuthology.orchestra.run.vm09.stdout: python3-babel noarch 2.9.1-2.el9 appstream 6.0 M 2026-03-09T20:14:50.828 INFO:teuthology.orchestra.run.vm09.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 epel 60 k 2026-03-09T20:14:50.828 INFO:teuthology.orchestra.run.vm09.stdout: python3-bcrypt x86_64 3.2.2-1.el9 epel 43 k 2026-03-09T20:14:50.828 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools noarch 4.2.4-1.el9 epel 32 k 2026-03-09T20:14:50.828 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 45 k 2026-03-09T20:14:50.828 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 142 k 2026-03-09T20:14:50.828 INFO:teuthology.orchestra.run.vm09.stdout: python3-certifi noarch 2023.05.07-4.el9 epel 14 k 2026-03-09T20:14:50.828 INFO:teuthology.orchestra.run.vm09.stdout: python3-cffi x86_64 1.14.5-5.el9 baseos 253 k 2026-03-09T20:14:50.828 INFO:teuthology.orchestra.run.vm09.stdout: python3-cheroot noarch 10.0.1-4.el9 epel 173 k 2026-03-09T20:14:50.828 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy noarch 18.6.1-2.el9 epel 358 k 2026-03-09T20:14:50.828 INFO:teuthology.orchestra.run.vm09.stdout: python3-cryptography x86_64 36.0.1-5.el9 baseos 1.2 M 2026-03-09T20:14:50.828 INFO:teuthology.orchestra.run.vm09.stdout: python3-devel x86_64 3.9.25-3.el9 appstream 244 k 2026-03-09T20:14:50.828 INFO:teuthology.orchestra.run.vm09.stdout: python3-google-auth noarch 1:2.45.0-1.el9 epel 254 k 2026-03-09T20:14:50.828 INFO:teuthology.orchestra.run.vm09.stdout: python3-grpcio x86_64 1.46.7-10.el9 epel 2.0 M 2026-03-09T20:14:50.828 INFO:teuthology.orchestra.run.vm09.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 epel 144 k 2026-03-09T20:14:50.828 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco noarch 8.2.1-3.el9 epel 11 k 2026-03-09T20:14:50.828 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 epel 18 k 2026-03-09T20:14:50.828 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 epel 23 k 2026-03-09T20:14:50.828 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-context noarch 6.0.1-3.el9 epel 20 k 2026-03-09T20:14:50.828 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 epel 19 k 2026-03-09T20:14:50.828 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-text noarch 4.0.0-2.el9 epel 26 k 2026-03-09T20:14:50.828 INFO:teuthology.orchestra.run.vm09.stdout: python3-jinja2 noarch 2.11.3-8.el9 appstream 249 k 2026-03-09T20:14:50.828 INFO:teuthology.orchestra.run.vm09.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 epel 1.0 M 2026-03-09T20:14:50.828 INFO:teuthology.orchestra.run.vm09.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 appstream 177 k 2026-03-09T20:14:50.828 INFO:teuthology.orchestra.run.vm09.stdout: python3-logutils noarch 0.3.5-21.el9 epel 46 k 2026-03-09T20:14:50.828 INFO:teuthology.orchestra.run.vm09.stdout: python3-mako noarch 1.1.4-6.el9 appstream 172 k 2026-03-09T20:14:50.828 INFO:teuthology.orchestra.run.vm09.stdout: python3-markupsafe x86_64 1.1.1-12.el9 appstream 35 k 2026-03-09T20:14:50.828 INFO:teuthology.orchestra.run.vm09.stdout: 
python3-more-itertools noarch 8.12.0-2.el9 epel 79 k 2026-03-09T20:14:50.828 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort noarch 7.1.1-5.el9 epel 58 k 2026-03-09T20:14:50.828 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy x86_64 1:1.23.5-2.el9 appstream 6.1 M 2026-03-09T20:14:50.828 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 appstream 442 k 2026-03-09T20:14:50.828 INFO:teuthology.orchestra.run.vm09.stdout: python3-packaging noarch 20.9-5.el9 appstream 77 k 2026-03-09T20:14:50.828 INFO:teuthology.orchestra.run.vm09.stdout: python3-pecan noarch 1.4.2-3.el9 epel 272 k 2026-03-09T20:14:50.828 INFO:teuthology.orchestra.run.vm09.stdout: python3-ply noarch 3.11-14.el9 baseos 106 k 2026-03-09T20:14:50.828 INFO:teuthology.orchestra.run.vm09.stdout: python3-portend noarch 3.1.0-2.el9 epel 16 k 2026-03-09T20:14:50.828 INFO:teuthology.orchestra.run.vm09.stdout: python3-protobuf noarch 3.14.0-17.el9 appstream 267 k 2026-03-09T20:14:50.828 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 epel 90 k 2026-03-09T20:14:50.828 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1 noarch 0.4.8-7.el9 appstream 157 k 2026-03-09T20:14:50.828 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 appstream 277 k 2026-03-09T20:14:50.828 INFO:teuthology.orchestra.run.vm09.stdout: python3-pycparser noarch 2.20-6.el9 baseos 135 k 2026-03-09T20:14:50.828 INFO:teuthology.orchestra.run.vm09.stdout: python3-repoze-lru noarch 0.7-16.el9 epel 31 k 2026-03-09T20:14:50.829 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests noarch 2.25.1-10.el9 baseos 126 k 2026-03-09T20:14:50.829 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 appstream 54 k 2026-03-09T20:14:50.829 INFO:teuthology.orchestra.run.vm09.stdout: python3-routes noarch 2.5.1-5.el9 epel 188 k 2026-03-09T20:14:50.829 INFO:teuthology.orchestra.run.vm09.stdout: python3-rsa noarch 4.9-2.el9 epel 59 k 2026-03-09T20:14:50.829 INFO:teuthology.orchestra.run.vm09.stdout: python3-scipy x86_64 1.9.3-2.el9 appstream 19 M 2026-03-09T20:14:50.829 INFO:teuthology.orchestra.run.vm09.stdout: python3-tempora noarch 5.0.0-2.el9 epel 36 k 2026-03-09T20:14:50.829 INFO:teuthology.orchestra.run.vm09.stdout: python3-toml noarch 0.10.2-6.el9 appstream 42 k 2026-03-09T20:14:50.829 INFO:teuthology.orchestra.run.vm09.stdout: python3-typing-extensions noarch 4.15.0-1.el9 epel 86 k 2026-03-09T20:14:50.829 INFO:teuthology.orchestra.run.vm09.stdout: python3-urllib3 noarch 1.26.5-7.el9 baseos 218 k 2026-03-09T20:14:50.829 INFO:teuthology.orchestra.run.vm09.stdout: python3-webob noarch 1.8.8-2.el9 epel 230 k 2026-03-09T20:14:50.829 INFO:teuthology.orchestra.run.vm09.stdout: python3-websocket-client noarch 1.2.3-2.el9 epel 90 k 2026-03-09T20:14:50.829 INFO:teuthology.orchestra.run.vm09.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 epel 427 k 2026-03-09T20:14:50.829 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc-lockfile noarch 2.0-10.el9 epel 20 k 2026-03-09T20:14:50.829 INFO:teuthology.orchestra.run.vm09.stdout: qatlib x86_64 25.08.0-2.el9 appstream 240 k 2026-03-09T20:14:50.829 INFO:teuthology.orchestra.run.vm09.stdout: qatzip-libs x86_64 1.3.1-1.el9 appstream 66 k 2026-03-09T20:14:50.829 INFO:teuthology.orchestra.run.vm09.stdout: re2 x86_64 1:20211101-20.el9 epel 191 k 2026-03-09T20:14:50.829 INFO:teuthology.orchestra.run.vm09.stdout: socat x86_64 1.7.4.1-8.el9 appstream 303 k 2026-03-09T20:14:50.829 
INFO:teuthology.orchestra.run.vm09.stdout: thrift x86_64 0.15.0-4.el9 epel 1.6 M 2026-03-09T20:14:50.829 INFO:teuthology.orchestra.run.vm09.stdout: unzip x86_64 6.0-59.el9 baseos 182 k 2026-03-09T20:14:50.829 INFO:teuthology.orchestra.run.vm09.stdout: xmlstarlet x86_64 1.6.1-20.el9 appstream 64 k 2026-03-09T20:14:50.829 INFO:teuthology.orchestra.run.vm09.stdout: zip x86_64 3.0-35.el9 baseos 266 k 2026-03-09T20:14:50.829 INFO:teuthology.orchestra.run.vm09.stdout:Installing weak dependencies: 2026-03-09T20:14:50.829 INFO:teuthology.orchestra.run.vm09.stdout: qatlib-service x86_64 25.08.0-2.el9 appstream 37 k 2026-03-09T20:14:50.829 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:14:50.829 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary 2026-03-09T20:14:50.829 INFO:teuthology.orchestra.run.vm09.stdout:====================================================================================== 2026-03-09T20:14:50.829 INFO:teuthology.orchestra.run.vm09.stdout:Install 134 Packages 2026-03-09T20:14:50.829 INFO:teuthology.orchestra.run.vm09.stdout:Upgrade 2 Packages 2026-03-09T20:14:50.829 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:14:50.829 INFO:teuthology.orchestra.run.vm09.stdout:Total download size: 210 M 2026-03-09T20:14:50.829 INFO:teuthology.orchestra.run.vm09.stdout:Downloading Packages: 2026-03-09T20:14:52.531 INFO:teuthology.orchestra.run.vm09.stdout:(1/136): ceph-19.2.3-678.ge911bdeb.el9.x86_64.r 14 kB/s | 6.5 kB 00:00 2026-03-09T20:14:52.992 INFO:teuthology.orchestra.run.vm05.stdout:lab-extras 61 kB/s | 50 kB 00:00 2026-03-09T20:14:54.073 INFO:teuthology.orchestra.run.vm09.stdout:(2/136): ceph-fuse-19.2.3-678.ge911bdeb.el9.x86 764 kB/s | 1.2 MB 00:01 2026-03-09T20:14:54.307 INFO:teuthology.orchestra.run.vm09.stdout:(3/136): ceph-immutable-object-cache-19.2.3-678 622 kB/s | 145 kB 00:00 2026-03-09T20:14:54.501 INFO:teuthology.orchestra.run.vm05.stdout:Package librados2-2:16.2.4-5.el9.x86_64 is already installed. 2026-03-09T20:14:54.502 INFO:teuthology.orchestra.run.vm05.stdout:Package librbd1-2:16.2.4-5.el9.x86_64 is already installed. 2026-03-09T20:14:54.506 INFO:teuthology.orchestra.run.vm05.stdout:Package bzip2-1.0.8-11.el9.x86_64 is already installed. 2026-03-09T20:14:54.507 INFO:teuthology.orchestra.run.vm05.stdout:Package perl-Test-Harness-1:3.42-461.el9.noarch is already installed. 2026-03-09T20:14:54.540 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 
2026-03-09T20:14:54.545 INFO:teuthology.orchestra.run.vm05.stdout:====================================================================================== 2026-03-09T20:14:54.545 INFO:teuthology.orchestra.run.vm05.stdout: Package Arch Version Repository Size 2026-03-09T20:14:54.545 INFO:teuthology.orchestra.run.vm05.stdout:====================================================================================== 2026-03-09T20:14:54.545 INFO:teuthology.orchestra.run.vm05.stdout:Installing: 2026-03-09T20:14:54.545 INFO:teuthology.orchestra.run.vm05.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 6.5 k 2026-03-09T20:14:54.545 INFO:teuthology.orchestra.run.vm05.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.5 M 2026-03-09T20:14:54.545 INFO:teuthology.orchestra.run.vm05.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.2 M 2026-03-09T20:14:54.545 INFO:teuthology.orchestra.run.vm05.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 145 k 2026-03-09T20:14:54.545 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.1 M 2026-03-09T20:14:54.545 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 150 k 2026-03-09T20:14:54.545 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 3.8 M 2026-03-09T20:14:54.545 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 7.4 M 2026-03-09T20:14:54.545 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 49 k 2026-03-09T20:14:54.545 INFO:teuthology.orchestra.run.vm05.stdout: ceph-radosgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 11 M 2026-03-09T20:14:54.545 INFO:teuthology.orchestra.run.vm05.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 50 M 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 299 k 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 769 k 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 34 k 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.0 M 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 127 k 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 165 k 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: python3-jmespath noarch 1.0.1-1.el9 appstream 48 k 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 323 k 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 303 k 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 100 k 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: python3-xmltodict noarch 0.12.0-15.el9 epel 22 k 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 85 k 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.1 M 2026-03-09T20:14:54.546 
INFO:teuthology.orchestra.run.vm05.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 171 k 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout:Upgrading: 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.4 M 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.2 M 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout:Installing dependencies: 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: abseil-cpp x86_64 20211102.0-4.el9 epel 551 k 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: boost-program-options x86_64 1.75.0-13.el9 appstream 104 k 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 22 M 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 31 k 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 2.4 M 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 253 k 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 4.7 M 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 17 M 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 17 k 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: ceph-selinux x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 25 k 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: cryptsetup x86_64 2.8.1-3.el9 baseos 351 k 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas x86_64 3.0.4-9.el9 appstream 30 k 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 appstream 3.0 M 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 appstream 15 k 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: gperftools-libs x86_64 2.9.1-3.el9 epel 308 k 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: grpc-data noarch 1.46.7-10.el9 epel 19 k 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: ledmon-libs x86_64 1.1.0-3.el9 baseos 40 k 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: libarrow x86_64 9.0.0-15.el9 epel 4.4 M 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: libarrow-doc noarch 9.0.0-15.el9 epel 25 k 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 163 k 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: libconfig x86_64 1.7.2-9.el9 baseos 72 k 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: libgfortran x86_64 11.5.0-14.el9 baseos 794 k 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: libnbd x86_64 1.20.3-4.el9 appstream 164 k 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: liboath x86_64 2.6.12-1.el9 epel 49 k 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: libpmemobj x86_64 1.12.1-1.el9 appstream 160 k 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: libquadmath x86_64 
11.5.0-14.el9 baseos 184 k 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: librabbitmq x86_64 0.11.0-7.el9 appstream 45 k 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 503 k 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: librdkafka x86_64 1.6.1-102.el9 appstream 662 k 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.4 M 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: libstoragemgmt x86_64 1.10.1-1.el9 appstream 246 k 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: libunwind x86_64 1.6.2-1.el9 epel 67 k 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: libxslt x86_64 1.1.34-12.el9 appstream 233 k 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: lttng-ust x86_64 2.12.0-6.el9 appstream 292 k 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: lua x86_64 5.4.4-4.el9 appstream 188 k 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: lua-devel x86_64 5.4.4-4.el9 crb 22 k 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: luarocks noarch 3.9.2-5.el9 epel 151 k 2026-03-09T20:14:54.546 INFO:teuthology.orchestra.run.vm05.stdout: mailcap noarch 2.1.49-5.el9 baseos 33 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: openblas x86_64 0.3.29-1.el9 appstream 42 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: openblas-openmp x86_64 0.3.29-1.el9 appstream 5.3 M 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: parquet-libs x86_64 9.0.0-15.el9 epel 838 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: pciutils x86_64 3.7.0-7.el9 baseos 93 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: protobuf x86_64 3.14.0-17.el9 appstream 1.0 M 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: protobuf-compiler x86_64 3.14.0-17.el9 crb 862 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-asyncssh noarch 2.13.2-5.el9 epel 548 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-autocommand noarch 2.2.2-8.el9 epel 29 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-babel noarch 2.9.1-2.el9 appstream 6.0 M 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 epel 60 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-bcrypt x86_64 3.2.2-1.el9 epel 43 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-cachetools noarch 4.2.4-1.el9 epel 32 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 45 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 142 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-certifi noarch 2023.05.07-4.el9 epel 14 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-cffi x86_64 1.14.5-5.el9 baseos 253 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-cheroot noarch 10.0.1-4.el9 epel 173 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-cherrypy noarch 18.6.1-2.el9 epel 358 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: 
python3-cryptography x86_64 36.0.1-5.el9 baseos 1.2 M 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-devel x86_64 3.9.25-3.el9 appstream 244 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-google-auth noarch 1:2.45.0-1.el9 epel 254 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-grpcio x86_64 1.46.7-10.el9 epel 2.0 M 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 epel 144 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco noarch 8.2.1-3.el9 epel 11 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 epel 18 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 epel 23 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-context noarch 6.0.1-3.el9 epel 20 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 epel 19 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-text noarch 4.0.0-2.el9 epel 26 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-jinja2 noarch 2.11.3-8.el9 appstream 249 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 epel 1.0 M 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 appstream 177 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-logutils noarch 0.3.5-21.el9 epel 46 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-mako noarch 1.1.4-6.el9 appstream 172 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-markupsafe x86_64 1.1.1-12.el9 appstream 35 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-more-itertools noarch 8.12.0-2.el9 epel 79 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-natsort noarch 7.1.1-5.el9 epel 58 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-numpy x86_64 1:1.23.5-2.el9 appstream 6.1 M 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 appstream 442 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-packaging noarch 20.9-5.el9 appstream 77 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-pecan noarch 1.4.2-3.el9 epel 272 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-ply noarch 3.11-14.el9 baseos 106 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-portend noarch 3.1.0-2.el9 epel 16 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-protobuf noarch 3.14.0-17.el9 appstream 267 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 epel 90 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyasn1 noarch 0.4.8-7.el9 appstream 157 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 appstream 277 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-pycparser noarch 2.20-6.el9 baseos 135 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-repoze-lru 
noarch 0.7-16.el9 epel 31 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-requests noarch 2.25.1-10.el9 baseos 126 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 appstream 54 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-routes noarch 2.5.1-5.el9 epel 188 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-rsa noarch 4.9-2.el9 epel 59 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-scipy x86_64 1.9.3-2.el9 appstream 19 M 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-tempora noarch 5.0.0-2.el9 epel 36 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-toml noarch 0.10.2-6.el9 appstream 42 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-typing-extensions noarch 4.15.0-1.el9 epel 86 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-urllib3 noarch 1.26.5-7.el9 baseos 218 k 2026-03-09T20:14:54.547 INFO:teuthology.orchestra.run.vm05.stdout: python3-webob noarch 1.8.8-2.el9 epel 230 k 2026-03-09T20:14:54.548 INFO:teuthology.orchestra.run.vm05.stdout: python3-websocket-client noarch 1.2.3-2.el9 epel 90 k 2026-03-09T20:14:54.548 INFO:teuthology.orchestra.run.vm05.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 epel 427 k 2026-03-09T20:14:54.548 INFO:teuthology.orchestra.run.vm05.stdout: python3-zc-lockfile noarch 2.0-10.el9 epel 20 k 2026-03-09T20:14:54.548 INFO:teuthology.orchestra.run.vm05.stdout: qatlib x86_64 25.08.0-2.el9 appstream 240 k 2026-03-09T20:14:54.548 INFO:teuthology.orchestra.run.vm05.stdout: qatzip-libs x86_64 1.3.1-1.el9 appstream 66 k 2026-03-09T20:14:54.548 INFO:teuthology.orchestra.run.vm05.stdout: re2 x86_64 1:20211101-20.el9 epel 191 k 2026-03-09T20:14:54.548 INFO:teuthology.orchestra.run.vm05.stdout: socat x86_64 1.7.4.1-8.el9 appstream 303 k 2026-03-09T20:14:54.548 INFO:teuthology.orchestra.run.vm05.stdout: thrift x86_64 0.15.0-4.el9 epel 1.6 M 2026-03-09T20:14:54.548 INFO:teuthology.orchestra.run.vm05.stdout: unzip x86_64 6.0-59.el9 baseos 182 k 2026-03-09T20:14:54.548 INFO:teuthology.orchestra.run.vm05.stdout: xmlstarlet x86_64 1.6.1-20.el9 appstream 64 k 2026-03-09T20:14:54.548 INFO:teuthology.orchestra.run.vm05.stdout: zip x86_64 3.0-35.el9 baseos 266 k 2026-03-09T20:14:54.548 INFO:teuthology.orchestra.run.vm05.stdout:Installing weak dependencies: 2026-03-09T20:14:54.548 INFO:teuthology.orchestra.run.vm05.stdout: qatlib-service x86_64 25.08.0-2.el9 appstream 37 k 2026-03-09T20:14:54.548 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:14:54.548 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary 2026-03-09T20:14:54.548 INFO:teuthology.orchestra.run.vm05.stdout:====================================================================================== 2026-03-09T20:14:54.548 INFO:teuthology.orchestra.run.vm05.stdout:Install 134 Packages 2026-03-09T20:14:54.548 INFO:teuthology.orchestra.run.vm05.stdout:Upgrade 2 Packages 2026-03-09T20:14:54.548 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:14:54.548 INFO:teuthology.orchestra.run.vm05.stdout:Total download size: 210 M 2026-03-09T20:14:54.548 INFO:teuthology.orchestra.run.vm05.stdout:Downloading Packages: 2026-03-09T20:14:55.696 INFO:teuthology.orchestra.run.vm09.stdout:(4/136): ceph-mds-19.2.3-678.ge911bdeb.el9.x86_ 1.7 MB/s | 2.4 MB 00:01 2026-03-09T20:14:55.879 INFO:teuthology.orchestra.run.vm09.stdout:(5/136): 
ceph-base-19.2.3-678.ge911bdeb.el9.x86 1.4 MB/s | 5.5 MB 00:03 2026-03-09T20:14:56.202 INFO:teuthology.orchestra.run.vm09.stdout:(6/136): ceph-mgr-19.2.3-678.ge911bdeb.el9.x86_ 2.1 MB/s | 1.1 MB 00:00 2026-03-09T20:14:56.262 INFO:teuthology.orchestra.run.vm05.stdout:(1/136): ceph-19.2.3-678.ge911bdeb.el9.x86_64.r 14 kB/s | 6.5 kB 00:00 2026-03-09T20:14:57.135 INFO:teuthology.orchestra.run.vm09.stdout:(7/136): ceph-mon-19.2.3-678.ge911bdeb.el9.x86_ 3.8 MB/s | 4.7 MB 00:01 2026-03-09T20:14:57.748 INFO:teuthology.orchestra.run.vm05.stdout:(2/136): ceph-fuse-19.2.3-678.ge911bdeb.el9.x86 793 kB/s | 1.2 MB 00:01 2026-03-09T20:14:58.038 INFO:teuthology.orchestra.run.vm05.stdout:(3/136): ceph-immutable-object-cache-19.2.3-678 503 kB/s | 145 kB 00:00 2026-03-09T20:14:58.525 INFO:teuthology.orchestra.run.vm09.stdout:(8/136): ceph-common-19.2.3-678.ge911bdeb.el9.x 3.4 MB/s | 22 MB 00:06 2026-03-09T20:14:58.635 INFO:teuthology.orchestra.run.vm09.stdout:(9/136): ceph-selinux-19.2.3-678.ge911bdeb.el9. 229 kB/s | 25 kB 00:00 2026-03-09T20:14:58.749 INFO:teuthology.orchestra.run.vm09.stdout:(10/136): ceph-radosgw-19.2.3-678.ge911bdeb.el9 6.7 MB/s | 11 MB 00:01 2026-03-09T20:14:58.868 INFO:teuthology.orchestra.run.vm09.stdout:(11/136): libcephfs-devel-19.2.3-678.ge911bdeb. 285 kB/s | 34 kB 00:00 2026-03-09T20:14:58.993 INFO:teuthology.orchestra.run.vm09.stdout:(12/136): libcephfs2-19.2.3-678.ge911bdeb.el9.x 7.8 MB/s | 1.0 MB 00:00 2026-03-09T20:14:59.108 INFO:teuthology.orchestra.run.vm09.stdout:(13/136): libcephsqlite-19.2.3-678.ge911bdeb.el 1.4 MB/s | 163 kB 00:00 2026-03-09T20:14:59.250 INFO:teuthology.orchestra.run.vm09.stdout:(14/136): librados-devel-19.2.3-678.ge911bdeb.e 893 kB/s | 127 kB 00:00 2026-03-09T20:14:59.419 INFO:teuthology.orchestra.run.vm05.stdout:(4/136): ceph-mds-19.2.3-678.ge911bdeb.el9.x86_ 1.8 MB/s | 2.4 MB 00:01 2026-03-09T20:14:59.438 INFO:teuthology.orchestra.run.vm09.stdout:(15/136): ceph-osd-19.2.3-678.ge911bdeb.el9.x86 5.3 MB/s | 17 MB 00:03 2026-03-09T20:14:59.441 INFO:teuthology.orchestra.run.vm09.stdout:(16/136): libradosstriper1-19.2.3-678.ge911bdeb 2.6 MB/s | 503 kB 00:00 2026-03-09T20:14:59.562 INFO:teuthology.orchestra.run.vm09.stdout:(17/136): python3-ceph-argparse-19.2.3-678.ge91 375 kB/s | 45 kB 00:00 2026-03-09T20:14:59.677 INFO:teuthology.orchestra.run.vm09.stdout:(18/136): python3-ceph-common-19.2.3-678.ge911b 1.2 MB/s | 142 kB 00:00 2026-03-09T20:14:59.794 INFO:teuthology.orchestra.run.vm09.stdout:(19/136): python3-cephfs-19.2.3-678.ge911bdeb.e 1.4 MB/s | 165 kB 00:00 2026-03-09T20:14:59.879 INFO:teuthology.orchestra.run.vm05.stdout:(5/136): ceph-mgr-19.2.3-678.ge911bdeb.el9.x86_ 2.3 MB/s | 1.1 MB 00:00 2026-03-09T20:14:59.917 INFO:teuthology.orchestra.run.vm09.stdout:(20/136): python3-rados-19.2.3-678.ge911bdeb.el 2.6 MB/s | 323 kB 00:00 2026-03-09T20:15:00.107 INFO:teuthology.orchestra.run.vm09.stdout:(21/136): librgw2-19.2.3-678.ge911bdeb.el9.x86_ 8.1 MB/s | 5.4 MB 00:00 2026-03-09T20:15:00.144 INFO:teuthology.orchestra.run.vm09.stdout:(22/136): python3-rbd-19.2.3-678.ge911bdeb.el9. 1.3 MB/s | 303 kB 00:00 2026-03-09T20:15:00.257 INFO:teuthology.orchestra.run.vm09.stdout:(23/136): rbd-fuse-19.2.3-678.ge911bdeb.el9.x86 750 kB/s | 85 kB 00:00 2026-03-09T20:15:00.259 INFO:teuthology.orchestra.run.vm09.stdout:(24/136): python3-rgw-19.2.3-678.ge911bdeb.el9. 
660 kB/s | 100 kB 00:00 2026-03-09T20:15:00.456 INFO:teuthology.orchestra.run.vm09.stdout:(25/136): rbd-nbd-19.2.3-678.ge911bdeb.el9.x86_ 869 kB/s | 171 kB 00:00 2026-03-09T20:15:00.576 INFO:teuthology.orchestra.run.vm09.stdout:(26/136): ceph-grafana-dashboards-19.2.3-678.ge 261 kB/s | 31 kB 00:00 2026-03-09T20:15:00.621 INFO:teuthology.orchestra.run.vm09.stdout:(27/136): rbd-mirror-19.2.3-678.ge911bdeb.el9.x 8.6 MB/s | 3.1 MB 00:00 2026-03-09T20:15:00.717 INFO:teuthology.orchestra.run.vm09.stdout:(28/136): ceph-mgr-cephadm-19.2.3-678.ge911bdeb 1.0 MB/s | 150 kB 00:00 2026-03-09T20:15:00.717 INFO:teuthology.orchestra.run.vm05.stdout:(6/136): ceph-base-19.2.3-678.ge911bdeb.el9.x86 1.1 MB/s | 5.5 MB 00:04 2026-03-09T20:15:00.996 INFO:teuthology.orchestra.run.vm09.stdout:(29/136): ceph-mgr-dashboard-19.2.3-678.ge911bd 10 MB/s | 3.8 MB 00:00 2026-03-09T20:15:01.116 INFO:teuthology.orchestra.run.vm09.stdout:(30/136): ceph-mgr-modules-core-19.2.3-678.ge91 2.1 MB/s | 253 kB 00:00 2026-03-09T20:15:01.229 INFO:teuthology.orchestra.run.vm09.stdout:(31/136): ceph-mgr-rook-19.2.3-678.ge911bdeb.el 436 kB/s | 49 kB 00:00 2026-03-09T20:15:01.261 INFO:teuthology.orchestra.run.vm05.stdout:(7/136): ceph-mon-19.2.3-678.ge911bdeb.el9.x86_ 3.4 MB/s | 4.7 MB 00:01 2026-03-09T20:15:01.348 INFO:teuthology.orchestra.run.vm09.stdout:(32/136): ceph-prometheus-alerts-19.2.3-678.ge9 141 kB/s | 17 kB 00:00 2026-03-09T20:15:01.417 INFO:teuthology.orchestra.run.vm09.stdout:(33/136): ceph-mgr-diskprediction-local-19.2.3- 11 MB/s | 7.4 MB 00:00 2026-03-09T20:15:01.464 INFO:teuthology.orchestra.run.vm09.stdout:(34/136): ceph-volume-19.2.3-678.ge911bdeb.el9. 2.5 MB/s | 299 kB 00:00 2026-03-09T20:15:01.634 INFO:teuthology.orchestra.run.vm09.stdout:(35/136): cephadm-19.2.3-678.ge911bdeb.el9.noar 3.5 MB/s | 769 kB 00:00 2026-03-09T20:15:01.733 INFO:teuthology.orchestra.run.vm09.stdout:(36/136): cryptsetup-2.8.1-3.el9.x86_64.rpm 1.3 MB/s | 351 kB 00:00 2026-03-09T20:15:01.811 INFO:teuthology.orchestra.run.vm09.stdout:(37/136): ledmon-libs-1.1.0-3.el9.x86_64.rpm 228 kB/s | 40 kB 00:00 2026-03-09T20:15:01.812 INFO:teuthology.orchestra.run.vm09.stdout:(38/136): libconfig-1.7.2-9.el9.x86_64.rpm 917 kB/s | 72 kB 00:00 2026-03-09T20:15:01.851 INFO:teuthology.orchestra.run.vm09.stdout:(39/136): libquadmath-11.5.0-14.el9.x86_64.rpm 4.7 MB/s | 184 kB 00:00 2026-03-09T20:15:01.876 INFO:teuthology.orchestra.run.vm09.stdout:(40/136): mailcap-2.1.49-5.el9.noarch.rpm 1.3 MB/s | 33 kB 00:00 2026-03-09T20:15:01.911 INFO:teuthology.orchestra.run.vm09.stdout:(41/136): pciutils-3.7.0-7.el9.x86_64.rpm 2.6 MB/s | 93 kB 00:00 2026-03-09T20:15:01.940 INFO:teuthology.orchestra.run.vm09.stdout:(42/136): libgfortran-11.5.0-14.el9.x86_64.rpm 6.0 MB/s | 794 kB 00:00 2026-03-09T20:15:01.955 INFO:teuthology.orchestra.run.vm09.stdout:(43/136): python3-cffi-1.14.5-5.el9.x86_64.rpm 5.6 MB/s | 253 kB 00:00 2026-03-09T20:15:01.987 INFO:teuthology.orchestra.run.vm09.stdout:(44/136): python3-ply-3.11-14.el9.noarch.rpm 3.3 MB/s | 106 kB 00:00 2026-03-09T20:15:01.996 INFO:teuthology.orchestra.run.vm09.stdout:(45/136): python3-cryptography-36.0.1-5.el9.x86 22 MB/s | 1.2 MB 00:00 2026-03-09T20:15:02.020 INFO:teuthology.orchestra.run.vm09.stdout:(46/136): python3-pycparser-2.20-6.el9.noarch.r 3.9 MB/s | 135 kB 00:00 2026-03-09T20:15:02.021 INFO:teuthology.orchestra.run.vm09.stdout:(47/136): python3-requests-2.25.1-10.el9.noarch 5.0 MB/s | 126 kB 00:00 2026-03-09T20:15:02.065 INFO:teuthology.orchestra.run.vm09.stdout:(48/136): unzip-6.0-59.el9.x86_64.rpm 4.1 MB/s | 182 
kB 00:00 2026-03-09T20:15:02.081 INFO:teuthology.orchestra.run.vm09.stdout:(49/136): python3-urllib3-1.26.5-7.el9.noarch.r 3.5 MB/s | 218 kB 00:00 2026-03-09T20:15:02.096 INFO:teuthology.orchestra.run.vm09.stdout:(50/136): zip-3.0-35.el9.x86_64.rpm 8.7 MB/s | 266 kB 00:00 2026-03-09T20:15:02.200 INFO:teuthology.orchestra.run.vm09.stdout:(51/136): flexiblas-3.0.4-9.el9.x86_64.rpm 284 kB/s | 30 kB 00:00 2026-03-09T20:15:02.243 INFO:teuthology.orchestra.run.vm09.stdout:(52/136): boost-program-options-1.75.0-13.el9.x 646 kB/s | 104 kB 00:00 2026-03-09T20:15:02.286 INFO:teuthology.orchestra.run.vm09.stdout:(53/136): flexiblas-openblas-openmp-3.0.4-9.el9 349 kB/s | 15 kB 00:00 2026-03-09T20:15:02.331 INFO:teuthology.orchestra.run.vm05.stdout:(8/136): ceph-common-19.2.3-678.ge911bdeb.el9.x 3.3 MB/s | 22 MB 00:06 2026-03-09T20:15:02.407 INFO:teuthology.orchestra.run.vm09.stdout:(54/136): libnbd-1.20.3-4.el9.x86_64.rpm 1.3 MB/s | 164 kB 00:00 2026-03-09T20:15:02.447 INFO:teuthology.orchestra.run.vm05.stdout:(9/136): ceph-selinux-19.2.3-678.ge911bdeb.el9. 217 kB/s | 25 kB 00:00 2026-03-09T20:15:02.496 INFO:teuthology.orchestra.run.vm09.stdout:(55/136): libpmemobj-1.12.1-1.el9.x86_64.rpm 1.8 MB/s | 160 kB 00:00 2026-03-09T20:15:02.540 INFO:teuthology.orchestra.run.vm09.stdout:(56/136): librabbitmq-0.11.0-7.el9.x86_64.rpm 1.0 MB/s | 45 kB 00:00 2026-03-09T20:15:02.778 INFO:teuthology.orchestra.run.vm09.stdout:(57/136): librdkafka-1.6.1-102.el9.x86_64.rpm 2.7 MB/s | 662 kB 00:00 2026-03-09T20:15:02.844 INFO:teuthology.orchestra.run.vm09.stdout:(58/136): libstoragemgmt-1.10.1-1.el9.x86_64.rp 3.7 MB/s | 246 kB 00:00 2026-03-09T20:15:02.945 INFO:teuthology.orchestra.run.vm05.stdout:(10/136): ceph-radosgw-19.2.3-678.ge911bdeb.el9 6.4 MB/s | 11 MB 00:01 2026-03-09T20:15:02.945 INFO:teuthology.orchestra.run.vm09.stdout:(59/136): flexiblas-netlib-3.0.4-9.el9.x86_64.r 4.0 MB/s | 3.0 MB 00:00 2026-03-09T20:15:02.946 INFO:teuthology.orchestra.run.vm09.stdout:(60/136): libxslt-1.1.34-12.el9.x86_64.rpm 2.2 MB/s | 233 kB 00:00 2026-03-09T20:15:03.034 INFO:teuthology.orchestra.run.vm09.stdout:(61/136): lua-5.4.4-4.el9.x86_64.rpm 2.1 MB/s | 188 kB 00:00 2026-03-09T20:15:03.044 INFO:teuthology.orchestra.run.vm09.stdout:(62/136): lttng-ust-2.12.0-6.el9.x86_64.rpm 2.9 MB/s | 292 kB 00:00 2026-03-09T20:15:03.059 INFO:teuthology.orchestra.run.vm05.stdout:(11/136): libcephfs-devel-19.2.3-678.ge911bdeb. 
294 kB/s | 34 kB 00:00 2026-03-09T20:15:03.080 INFO:teuthology.orchestra.run.vm09.stdout:(63/136): openblas-0.3.29-1.el9.x86_64.rpm 923 kB/s | 42 kB 00:00 2026-03-09T20:15:03.273 INFO:teuthology.orchestra.run.vm05.stdout:(12/136): libcephfs2-19.2.3-678.ge911bdeb.el9.x 4.6 MB/s | 1.0 MB 00:00 2026-03-09T20:15:03.283 INFO:teuthology.orchestra.run.vm09.stdout:(64/136): protobuf-3.14.0-17.el9.x86_64.rpm 4.9 MB/s | 1.0 MB 00:00 2026-03-09T20:15:03.993 INFO:teuthology.orchestra.run.vm05.stdout:(13/136): libcephsqlite-19.2.3-678.ge911bdeb.el 226 kB/s | 163 kB 00:00 2026-03-09T20:15:04.205 INFO:teuthology.orchestra.run.vm09.stdout:(65/136): python3-babel-2.9.1-2.el9.noarch.rpm 6.5 MB/s | 6.0 MB 00:00 2026-03-09T20:15:04.206 INFO:teuthology.orchestra.run.vm05.stdout:(14/136): librados-devel-19.2.3-678.ge911bdeb.e 595 kB/s | 127 kB 00:00 2026-03-09T20:15:04.272 INFO:teuthology.orchestra.run.vm09.stdout:(66/136): python3-devel-3.9.25-3.el9.x86_64.rpm 3.6 MB/s | 244 kB 00:00 2026-03-09T20:15:04.326 INFO:teuthology.orchestra.run.vm05.stdout:(15/136): libradosstriper1-19.2.3-678.ge911bdeb 4.1 MB/s | 503 kB 00:00 2026-03-09T20:15:04.353 INFO:teuthology.orchestra.run.vm09.stdout:(67/136): python3-jinja2-2.11.3-8.el9.noarch.rp 3.0 MB/s | 249 kB 00:00 2026-03-09T20:15:04.424 INFO:teuthology.orchestra.run.vm09.stdout:(68/136): python3-jmespath-1.0.1-1.el9.noarch.r 675 kB/s | 48 kB 00:00 2026-03-09T20:15:04.519 INFO:teuthology.orchestra.run.vm09.stdout:(69/136): python3-libstoragemgmt-1.10.1-1.el9.x 1.8 MB/s | 177 kB 00:00 2026-03-09T20:15:04.580 INFO:teuthology.orchestra.run.vm09.stdout:(70/136): python3-mako-1.1.4-6.el9.noarch.rpm 2.8 MB/s | 172 kB 00:00 2026-03-09T20:15:04.636 INFO:teuthology.orchestra.run.vm09.stdout:(71/136): python3-markupsafe-1.1.1-12.el9.x86_6 628 kB/s | 35 kB 00:00 2026-03-09T20:15:04.776 INFO:teuthology.orchestra.run.vm09.stdout:(72/136): ceph-test-19.2.3-678.ge911bdeb.el9.x8 8.1 MB/s | 50 MB 00:06 2026-03-09T20:15:04.838 INFO:teuthology.orchestra.run.vm09.stdout:(73/136): openblas-openmp-0.3.29-1.el9.x86_64.r 2.9 MB/s | 5.3 MB 00:01 2026-03-09T20:15:04.920 INFO:teuthology.orchestra.run.vm09.stdout:(74/136): python3-packaging-20.9-5.el9.noarch.r 943 kB/s | 77 kB 00:00 2026-03-09T20:15:05.036 INFO:teuthology.orchestra.run.vm09.stdout:(75/136): python3-protobuf-3.14.0-17.el9.noarch 2.3 MB/s | 267 kB 00:00 2026-03-09T20:15:05.183 INFO:teuthology.orchestra.run.vm09.stdout:(76/136): python3-numpy-f2py-1.23.5-2.el9.x86_6 1.1 MB/s | 442 kB 00:00 2026-03-09T20:15:05.197 INFO:teuthology.orchestra.run.vm05.stdout:(16/136): librgw2-19.2.3-678.ge911bdeb.el9.x86_ 6.2 MB/s | 5.4 MB 00:00 2026-03-09T20:15:05.216 INFO:teuthology.orchestra.run.vm09.stdout:(77/136): python3-pyasn1-0.4.8-7.el9.noarch.rpm 878 kB/s | 157 kB 00:00 2026-03-09T20:15:05.330 INFO:teuthology.orchestra.run.vm09.stdout:(78/136): python3-pyasn1-modules-0.4.8-7.el9.no 1.9 MB/s | 277 kB 00:00 2026-03-09T20:15:05.330 INFO:teuthology.orchestra.run.vm05.stdout:(17/136): python3-ceph-argparse-19.2.3-678.ge91 337 kB/s | 45 kB 00:00 2026-03-09T20:15:05.331 INFO:teuthology.orchestra.run.vm09.stdout:(79/136): python3-requests-oauthlib-1.3.0-12.el 468 kB/s | 54 kB 00:00 2026-03-09T20:15:05.401 INFO:teuthology.orchestra.run.vm09.stdout:(80/136): python3-toml-0.10.2-6.el9.noarch.rpm 599 kB/s | 42 kB 00:00 2026-03-09T20:15:05.446 INFO:teuthology.orchestra.run.vm05.stdout:(18/136): python3-ceph-common-19.2.3-678.ge911b 1.2 MB/s | 142 kB 00:00 2026-03-09T20:15:05.491 INFO:teuthology.orchestra.run.vm09.stdout:(81/136): 
python3-numpy-1.23.5-2.el9.x86_64.rpm 7.2 MB/s | 6.1 MB 00:00 2026-03-09T20:15:05.521 INFO:teuthology.orchestra.run.vm09.stdout:(82/136): qatlib-25.08.0-2.el9.x86_64.rpm 1.9 MB/s | 240 kB 00:00 2026-03-09T20:15:05.536 INFO:teuthology.orchestra.run.vm09.stdout:(83/136): qatlib-service-25.08.0-2.el9.x86_64.r 839 kB/s | 37 kB 00:00 2026-03-09T20:15:05.562 INFO:teuthology.orchestra.run.vm05.stdout:(19/136): python3-cephfs-19.2.3-678.ge911bdeb.e 1.4 MB/s | 165 kB 00:00 2026-03-09T20:15:05.590 INFO:teuthology.orchestra.run.vm09.stdout:(84/136): qatzip-libs-1.3.1-1.el9.x86_64.rpm 968 kB/s | 66 kB 00:00 2026-03-09T20:15:05.618 INFO:teuthology.orchestra.run.vm09.stdout:(85/136): socat-1.7.4.1-8.el9.x86_64.rpm 3.6 MB/s | 303 kB 00:00 2026-03-09T20:15:05.646 INFO:teuthology.orchestra.run.vm09.stdout:(86/136): xmlstarlet-1.6.1-20.el9.x86_64.rpm 1.1 MB/s | 64 kB 00:00 2026-03-09T20:15:05.680 INFO:teuthology.orchestra.run.vm05.stdout:(20/136): python3-rados-19.2.3-678.ge911bdeb.el 2.7 MB/s | 323 kB 00:00 2026-03-09T20:15:05.797 INFO:teuthology.orchestra.run.vm05.stdout:(21/136): python3-rbd-19.2.3-678.ge911bdeb.el9. 2.5 MB/s | 303 kB 00:00 2026-03-09T20:15:05.912 INFO:teuthology.orchestra.run.vm05.stdout:(22/136): python3-rgw-19.2.3-678.ge911bdeb.el9. 871 kB/s | 100 kB 00:00 2026-03-09T20:15:05.914 INFO:teuthology.orchestra.run.vm09.stdout:(87/136): lua-devel-5.4.4-4.el9.x86_64.rpm 75 kB/s | 22 kB 00:00 2026-03-09T20:15:05.935 INFO:teuthology.orchestra.run.vm09.stdout:(88/136): abseil-cpp-20211102.0-4.el9.x86_64.rp 27 MB/s | 551 kB 00:00 2026-03-09T20:15:05.944 INFO:teuthology.orchestra.run.vm09.stdout:(89/136): gperftools-libs-2.9.1-3.el9.x86_64.rp 35 MB/s | 308 kB 00:00 2026-03-09T20:15:05.948 INFO:teuthology.orchestra.run.vm09.stdout:(90/136): grpc-data-1.46.7-10.el9.noarch.rpm 4.3 MB/s | 19 kB 00:00 2026-03-09T20:15:06.027 INFO:teuthology.orchestra.run.vm05.stdout:(23/136): rbd-fuse-19.2.3-678.ge911bdeb.el9.x86 742 kB/s | 85 kB 00:00 2026-03-09T20:15:06.034 INFO:teuthology.orchestra.run.vm09.stdout:(91/136): libarrow-9.0.0-15.el9.x86_64.rpm 52 MB/s | 4.4 MB 00:00 2026-03-09T20:15:06.037 INFO:teuthology.orchestra.run.vm09.stdout:(92/136): libarrow-doc-9.0.0-15.el9.noarch.rpm 9.9 MB/s | 25 kB 00:00 2026-03-09T20:15:06.040 INFO:teuthology.orchestra.run.vm09.stdout:(93/136): liboath-2.6.12-1.el9.x86_64.rpm 18 MB/s | 49 kB 00:00 2026-03-09T20:15:06.043 INFO:teuthology.orchestra.run.vm09.stdout:(94/136): libunwind-1.6.2-1.el9.x86_64.rpm 22 MB/s | 67 kB 00:00 2026-03-09T20:15:06.048 INFO:teuthology.orchestra.run.vm09.stdout:(95/136): luarocks-3.9.2-5.el9.noarch.rpm 36 MB/s | 151 kB 00:00 2026-03-09T20:15:06.065 INFO:teuthology.orchestra.run.vm09.stdout:(96/136): parquet-libs-9.0.0-15.el9.x86_64.rpm 48 MB/s | 838 kB 00:00 2026-03-09T20:15:06.081 INFO:teuthology.orchestra.run.vm09.stdout:(97/136): python3-asyncssh-2.13.2-5.el9.noarch. 
37 MB/s | 548 kB 00:00 2026-03-09T20:15:06.083 INFO:teuthology.orchestra.run.vm09.stdout:(98/136): python3-autocommand-2.2.2-8.el9.noarc 12 MB/s | 29 kB 00:00 2026-03-09T20:15:06.086 INFO:teuthology.orchestra.run.vm09.stdout:(99/136): python3-backports-tarfile-1.2.0-1.el9 23 MB/s | 60 kB 00:00 2026-03-09T20:15:06.090 INFO:teuthology.orchestra.run.vm09.stdout:(100/136): python3-bcrypt-3.2.2-1.el9.x86_64.rp 10 MB/s | 43 kB 00:00 2026-03-09T20:15:06.093 INFO:teuthology.orchestra.run.vm09.stdout:(101/136): python3-cachetools-4.2.4-1.el9.noarc 14 MB/s | 32 kB 00:00 2026-03-09T20:15:06.096 INFO:teuthology.orchestra.run.vm09.stdout:(102/136): python3-certifi-2023.05.07-4.el9.noa 5.7 MB/s | 14 kB 00:00 2026-03-09T20:15:06.101 INFO:teuthology.orchestra.run.vm09.stdout:(103/136): python3-cheroot-10.0.1-4.el9.noarch. 34 MB/s | 173 kB 00:00 2026-03-09T20:15:06.108 INFO:teuthology.orchestra.run.vm09.stdout:(104/136): python3-cherrypy-18.6.1-2.el9.noarch 52 MB/s | 358 kB 00:00 2026-03-09T20:15:06.114 INFO:teuthology.orchestra.run.vm09.stdout:(105/136): python3-google-auth-2.45.0-1.el9.noa 45 MB/s | 254 kB 00:00 2026-03-09T20:15:06.155 INFO:teuthology.orchestra.run.vm09.stdout:(106/136): python3-grpcio-1.46.7-10.el9.x86_64. 51 MB/s | 2.0 MB 00:00 2026-03-09T20:15:06.177 INFO:teuthology.orchestra.run.vm09.stdout:(107/136): python3-grpcio-tools-1.46.7-10.el9.x 6.6 MB/s | 144 kB 00:00 2026-03-09T20:15:06.188 INFO:teuthology.orchestra.run.vm09.stdout:(108/136): python3-jaraco-8.2.1-3.el9.noarch.rp 988 kB/s | 11 kB 00:00 2026-03-09T20:15:06.191 INFO:teuthology.orchestra.run.vm09.stdout:(109/136): python3-jaraco-classes-3.2.1-5.el9.n 7.0 MB/s | 18 kB 00:00 2026-03-09T20:15:06.194 INFO:teuthology.orchestra.run.vm09.stdout:(110/136): python3-jaraco-collections-3.0.0-8.e 9.7 MB/s | 23 kB 00:00 2026-03-09T20:15:06.198 INFO:teuthology.orchestra.run.vm09.stdout:(111/136): python3-jaraco-context-6.0.1-3.el9.n 5.0 MB/s | 20 kB 00:00 2026-03-09T20:15:06.202 INFO:teuthology.orchestra.run.vm09.stdout:(112/136): python3-jaraco-functools-3.5.0-2.el9 4.8 MB/s | 19 kB 00:00 2026-03-09T20:15:06.205 INFO:teuthology.orchestra.run.vm09.stdout:(113/136): python3-jaraco-text-4.0.0-2.el9.noar 11 MB/s | 26 kB 00:00 2026-03-09T20:15:06.220 INFO:teuthology.orchestra.run.vm09.stdout:(114/136): python3-kubernetes-26.1.0-3.el9.noar 70 MB/s | 1.0 MB 00:00 2026-03-09T20:15:06.223 INFO:teuthology.orchestra.run.vm09.stdout:(115/136): python3-logutils-0.3.5-21.el9.noarch 16 MB/s | 46 kB 00:00 2026-03-09T20:15:06.231 INFO:teuthology.orchestra.run.vm09.stdout:(116/136): python3-more-itertools-8.12.0-2.el9. 
10 MB/s | 79 kB 00:00 2026-03-09T20:15:06.234 INFO:teuthology.orchestra.run.vm09.stdout:(117/136): python3-natsort-7.1.1-5.el9.noarch.r 20 MB/s | 58 kB 00:00 2026-03-09T20:15:06.241 INFO:teuthology.orchestra.run.vm09.stdout:(118/136): python3-pecan-1.4.2-3.el9.noarch.rpm 39 MB/s | 272 kB 00:00 2026-03-09T20:15:06.246 INFO:teuthology.orchestra.run.vm09.stdout:(119/136): python3-portend-3.1.0-2.el9.noarch.r 3.9 MB/s | 16 kB 00:00 2026-03-09T20:15:06.251 INFO:teuthology.orchestra.run.vm09.stdout:(120/136): python3-pyOpenSSL-21.0.0-1.el9.noarc 21 MB/s | 90 kB 00:00 2026-03-09T20:15:06.254 INFO:teuthology.orchestra.run.vm09.stdout:(121/136): python3-repoze-lru-0.7-16.el9.noarch 8.9 MB/s | 31 kB 00:00 2026-03-09T20:15:06.259 INFO:teuthology.orchestra.run.vm09.stdout:(122/136): python3-routes-2.5.1-5.el9.noarch.rp 40 MB/s | 188 kB 00:00 2026-03-09T20:15:06.262 INFO:teuthology.orchestra.run.vm09.stdout:(123/136): python3-rsa-4.9-2.el9.noarch.rpm 20 MB/s | 59 kB 00:00 2026-03-09T20:15:06.265 INFO:teuthology.orchestra.run.vm09.stdout:(124/136): python3-tempora-5.0.0-2.el9.noarch.r 15 MB/s | 36 kB 00:00 2026-03-09T20:15:06.268 INFO:teuthology.orchestra.run.vm09.stdout:(125/136): python3-typing-extensions-4.15.0-1.e 29 MB/s | 86 kB 00:00 2026-03-09T20:15:06.278 INFO:teuthology.orchestra.run.vm09.stdout:(126/136): python3-webob-1.8.8-2.el9.noarch.rpm 24 MB/s | 230 kB 00:00 2026-03-09T20:15:06.284 INFO:teuthology.orchestra.run.vm09.stdout:(127/136): python3-websocket-client-1.2.3-2.el9 18 MB/s | 90 kB 00:00 2026-03-09T20:15:06.292 INFO:teuthology.orchestra.run.vm09.stdout:(128/136): python3-werkzeug-2.0.3-3.el9.1.noarc 50 MB/s | 427 kB 00:00 2026-03-09T20:15:06.295 INFO:teuthology.orchestra.run.vm09.stdout:(129/136): python3-xmltodict-0.12.0-15.el9.noar 8.8 MB/s | 22 kB 00:00 2026-03-09T20:15:06.299 INFO:teuthology.orchestra.run.vm09.stdout:(130/136): python3-zc-lockfile-2.0-10.el9.noarc 6.1 MB/s | 20 kB 00:00 2026-03-09T20:15:06.303 INFO:teuthology.orchestra.run.vm09.stdout:(131/136): re2-20211101-20.el9.x86_64.rpm 37 MB/s | 191 kB 00:00 2026-03-09T20:15:06.324 INFO:teuthology.orchestra.run.vm09.stdout:(132/136): protobuf-compiler-3.14.0-17.el9.x86_ 1.2 MB/s | 862 kB 00:00 2026-03-09T20:15:06.329 INFO:teuthology.orchestra.run.vm09.stdout:(133/136): thrift-0.15.0-4.el9.x86_64.rpm 64 MB/s | 1.6 MB 00:00 2026-03-09T20:15:06.622 INFO:teuthology.orchestra.run.vm05.stdout:(24/136): rbd-mirror-19.2.3-678.ge911bdeb.el9.x 5.2 MB/s | 3.1 MB 00:00 2026-03-09T20:15:06.759 INFO:teuthology.orchestra.run.vm05.stdout:(25/136): rbd-nbd-19.2.3-678.ge911bdeb.el9.x86_ 1.2 MB/s | 171 kB 00:00 2026-03-09T20:15:06.843 INFO:teuthology.orchestra.run.vm09.stdout:(134/136): python3-scipy-1.9.3-2.el9.x86_64.rpm 13 MB/s | 19 MB 00:01 2026-03-09T20:15:06.873 INFO:teuthology.orchestra.run.vm05.stdout:(26/136): ceph-grafana-dashboards-19.2.3-678.ge 275 kB/s | 31 kB 00:00 2026-03-09T20:15:06.988 INFO:teuthology.orchestra.run.vm05.stdout:(27/136): ceph-mgr-cephadm-19.2.3-678.ge911bdeb 1.3 MB/s | 150 kB 00:00 2026-03-09T20:15:07.343 INFO:teuthology.orchestra.run.vm09.stdout:(135/136): librbd1-19.2.3-678.ge911bdeb.el9.x86 3.1 MB/s | 3.2 MB 00:01 2026-03-09T20:15:07.373 INFO:teuthology.orchestra.run.vm09.stdout:(136/136): librados2-19.2.3-678.ge911bdeb.el9.x 3.3 MB/s | 3.4 MB 00:01 2026-03-09T20:15:07.376 INFO:teuthology.orchestra.run.vm09.stdout:-------------------------------------------------------------------------------- 2026-03-09T20:15:07.377 INFO:teuthology.orchestra.run.vm09.stdout:Total 13 MB/s | 210 MB 00:16 
2026-03-09T20:15:07.785 INFO:teuthology.orchestra.run.vm05.stdout:(28/136): ceph-mgr-dashboard-19.2.3-678.ge911bd 4.8 MB/s | 3.8 MB 00:00 2026-03-09T20:15:08.034 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check 2026-03-09T20:15:08.087 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded. 2026-03-09T20:15:08.087 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test 2026-03-09T20:15:08.954 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded. 2026-03-09T20:15:08.954 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction 2026-03-09T20:15:09.148 INFO:teuthology.orchestra.run.vm05.stdout:(29/136): ceph-mgr-diskprediction-local-19.2.3- 5.4 MB/s | 7.4 MB 00:01 2026-03-09T20:15:09.499 INFO:teuthology.orchestra.run.vm05.stdout:(30/136): ceph-mgr-modules-core-19.2.3-678.ge91 721 kB/s | 253 kB 00:00 2026-03-09T20:15:09.613 INFO:teuthology.orchestra.run.vm05.stdout:(31/136): ceph-mgr-rook-19.2.3-678.ge911bdeb.el 433 kB/s | 49 kB 00:00 2026-03-09T20:15:09.726 INFO:teuthology.orchestra.run.vm05.stdout:(32/136): ceph-prometheus-alerts-19.2.3-678.ge9 148 kB/s | 17 kB 00:00 2026-03-09T20:15:09.844 INFO:teuthology.orchestra.run.vm05.stdout:(33/136): ceph-volume-19.2.3-678.ge911bdeb.el9. 2.5 MB/s | 299 kB 00:00 2026-03-09T20:15:09.909 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1 2026-03-09T20:15:09.930 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-more-itertools-8.12.0-2.el9.noarch 1/138 2026-03-09T20:15:09.944 INFO:teuthology.orchestra.run.vm09.stdout: Installing : thrift-0.15.0-4.el9.x86_64 2/138 2026-03-09T20:15:10.073 INFO:teuthology.orchestra.run.vm05.stdout:(34/136): cephadm-19.2.3-678.ge911bdeb.el9.noar 3.3 MB/s | 769 kB 00:00 2026-03-09T20:15:10.119 INFO:teuthology.orchestra.run.vm09.stdout: Installing : lttng-ust-2.12.0-6.el9.x86_64 3/138 2026-03-09T20:15:10.121 INFO:teuthology.orchestra.run.vm09.stdout: Upgrading : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138 2026-03-09T20:15:10.189 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138 2026-03-09T20:15:10.191 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/138 2026-03-09T20:15:10.223 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/138 2026-03-09T20:15:10.234 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 6/138 2026-03-09T20:15:10.239 INFO:teuthology.orchestra.run.vm09.stdout: Installing : librdkafka-1.6.1-102.el9.x86_64 7/138 2026-03-09T20:15:10.241 INFO:teuthology.orchestra.run.vm09.stdout: Installing : librabbitmq-0.11.0-7.el9.x86_64 8/138 2026-03-09T20:15:10.246 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jaraco-8.2.1-3.el9.noarch 9/138 2026-03-09T20:15:10.257 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libnbd-1.20.3-4.el9.x86_64 10/138 2026-03-09T20:15:10.258 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138 2026-03-09T20:15:10.298 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138 2026-03-09T20:15:10.300 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 12/138 2026-03-09T20:15:10.319 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 12/138 
2026-03-09T20:15:10.355 INFO:teuthology.orchestra.run.vm09.stdout: Installing : re2-1:20211101-20.el9.x86_64 13/138 2026-03-09T20:15:10.389 INFO:teuthology.orchestra.run.vm05.stdout:(35/136): cryptsetup-2.8.1-3.el9.x86_64.rpm 1.1 MB/s | 351 kB 00:00 2026-03-09T20:15:10.398 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libarrow-9.0.0-15.el9.x86_64 14/138 2026-03-09T20:15:10.403 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-werkzeug-2.0.3-3.el9.1.noarch 15/138 2026-03-09T20:15:10.422 INFO:teuthology.orchestra.run.vm05.stdout:(36/136): ledmon-libs-1.1.0-3.el9.x86_64.rpm 1.2 MB/s | 40 kB 00:00 2026-03-09T20:15:10.433 INFO:teuthology.orchestra.run.vm09.stdout: Installing : liboath-2.6.12-1.el9.x86_64 16/138 2026-03-09T20:15:10.448 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-pyasn1-0.4.8-7.el9.noarch 17/138 2026-03-09T20:15:10.456 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-packaging-20.9-5.el9.noarch 18/138 2026-03-09T20:15:10.468 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-markupsafe-1.1.1-12.el9.x86_64 19/138 2026-03-09T20:15:10.486 INFO:teuthology.orchestra.run.vm09.stdout: Installing : protobuf-3.14.0-17.el9.x86_64 20/138 2026-03-09T20:15:10.492 INFO:teuthology.orchestra.run.vm09.stdout: Installing : lua-5.4.4-4.el9.x86_64 21/138 2026-03-09T20:15:10.500 INFO:teuthology.orchestra.run.vm09.stdout: Installing : flexiblas-3.0.4-9.el9.x86_64 22/138 2026-03-09T20:15:10.570 INFO:teuthology.orchestra.run.vm05.stdout:(37/136): ceph-test-19.2.3-678.ge911bdeb.el9.x8 6.1 MB/s | 50 MB 00:08 2026-03-09T20:15:10.571 INFO:teuthology.orchestra.run.vm05.stdout:(38/136): libconfig-1.7.2-9.el9.x86_64.rpm 482 kB/s | 72 kB 00:00 2026-03-09T20:15:10.595 INFO:teuthology.orchestra.run.vm09.stdout: Installing : unzip-6.0-59.el9.x86_64 23/138 2026-03-09T20:15:10.614 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-urllib3-1.26.5-7.el9.noarch 24/138 2026-03-09T20:15:10.619 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-requests-2.25.1-10.el9.noarch 25/138 2026-03-09T20:15:10.628 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libquadmath-11.5.0-14.el9.x86_64 26/138 2026-03-09T20:15:10.631 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libgfortran-11.5.0-14.el9.x86_64 27/138 2026-03-09T20:15:10.674 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ledmon-libs-1.1.0-3.el9.x86_64 28/138 2026-03-09T20:15:10.682 INFO:teuthology.orchestra.run.vm05.stdout:(39/136): libquadmath-11.5.0-14.el9.x86_64.rpm 1.6 MB/s | 184 kB 00:00 2026-03-09T20:15:10.682 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 29/138 2026-03-09T20:15:10.694 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 30/138 2026-03-09T20:15:10.714 INFO:teuthology.orchestra.run.vm05.stdout:(40/136): mailcap-2.1.49-5.el9.noarch.rpm 1.0 MB/s | 33 kB 00:00 2026-03-09T20:15:10.724 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 31/138 2026-03-09T20:15:10.733 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-requests-oauthlib-1.3.0-12.el9.noarch 32/138 2026-03-09T20:15:10.764 INFO:teuthology.orchestra.run.vm09.stdout: Installing : zip-3.0-35.el9.x86_64 33/138 2026-03-09T20:15:10.771 INFO:teuthology.orchestra.run.vm09.stdout: Installing : luarocks-3.9.2-5.el9.noarch 34/138 2026-03-09T20:15:10.777 INFO:teuthology.orchestra.run.vm05.stdout:(41/136): 
pciutils-3.7.0-7.el9.x86_64.rpm 1.5 MB/s | 93 kB 00:00 2026-03-09T20:15:10.780 INFO:teuthology.orchestra.run.vm09.stdout: Installing : lua-devel-5.4.4-4.el9.x86_64 35/138 2026-03-09T20:15:10.813 INFO:teuthology.orchestra.run.vm09.stdout: Installing : protobuf-compiler-3.14.0-17.el9.x86_64 36/138 2026-03-09T20:15:10.877 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-mako-1.1.4-6.el9.noarch 37/138 2026-03-09T20:15:10.894 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-pyasn1-modules-0.4.8-7.el9.noarch 38/138 2026-03-09T20:15:10.901 INFO:teuthology.orchestra.run.vm05.stdout:(42/136): python3-cffi-1.14.5-5.el9.x86_64.rpm 2.0 MB/s | 253 kB 00:00 2026-03-09T20:15:10.904 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-rsa-4.9-2.el9.noarch 39/138 2026-03-09T20:15:10.913 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jaraco-classes-3.2.1-5.el9.noarch 40/138 2026-03-09T20:15:10.949 INFO:teuthology.orchestra.run.vm09.stdout: Installing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 41/138 2026-03-09T20:15:10.954 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-zc-lockfile-2.0-10.el9.noarch 42/138 2026-03-09T20:15:10.979 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-xmltodict-0.12.0-15.el9.noarch 43/138 2026-03-09T20:15:11.007 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-websocket-client-1.2.3-2.el9.noarch 44/138 2026-03-09T20:15:11.014 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-webob-1.8.8-2.el9.noarch 45/138 2026-03-09T20:15:11.023 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-typing-extensions-4.15.0-1.el9.noarch 46/138 2026-03-09T20:15:11.040 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-repoze-lru-0.7-16.el9.noarch 47/138 2026-03-09T20:15:11.055 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-routes-2.5.1-5.el9.noarch 48/138 2026-03-09T20:15:11.069 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-natsort-7.1.1-5.el9.noarch 49/138 2026-03-09T20:15:11.116 INFO:teuthology.orchestra.run.vm05.stdout:(43/136): libgfortran-11.5.0-14.el9.x86_64.rpm 1.4 MB/s | 794 kB 00:00 2026-03-09T20:15:11.140 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-logutils-0.3.5-21.el9.noarch 50/138 2026-03-09T20:15:11.150 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-pecan-1.4.2-3.el9.noarch 51/138 2026-03-09T20:15:11.161 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-certifi-2023.05.07-4.el9.noarch 52/138 2026-03-09T20:15:11.179 INFO:teuthology.orchestra.run.vm05.stdout:(44/136): python3-ply-3.11-14.el9.noarch.rpm 1.7 MB/s | 106 kB 00:00 2026-03-09T20:15:11.216 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-cachetools-4.2.4-1.el9.noarch 53/138 2026-03-09T20:15:11.243 INFO:teuthology.orchestra.run.vm05.stdout:(45/136): python3-pycparser-2.20-6.el9.noarch.r 2.1 MB/s | 135 kB 00:00 2026-03-09T20:15:11.320 INFO:teuthology.orchestra.run.vm05.stdout:(46/136): python3-requests-2.25.1-10.el9.noarch 1.6 MB/s | 126 kB 00:00 2026-03-09T20:15:11.414 INFO:teuthology.orchestra.run.vm05.stdout:(47/136): python3-urllib3-1.26.5-7.el9.noarch.r 2.3 MB/s | 218 kB 00:00 2026-03-09T20:15:11.545 INFO:teuthology.orchestra.run.vm05.stdout:(48/136): unzip-6.0-59.el9.x86_64.rpm 1.4 MB/s | 182 kB 00:00 2026-03-09T20:15:11.642 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-google-auth-1:2.45.0-1.el9.noarch 54/138 2026-03-09T20:15:11.664 
INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-kubernetes-1:26.1.0-3.el9.noarch 55/138 2026-03-09T20:15:11.671 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-backports-tarfile-1.2.0-1.el9.noarch 56/138 2026-03-09T20:15:11.679 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jaraco-context-6.0.1-3.el9.noarch 57/138 2026-03-09T20:15:11.684 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-autocommand-2.2.2-8.el9.noarch 58/138 2026-03-09T20:15:11.693 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libunwind-1.6.2-1.el9.x86_64 59/138 2026-03-09T20:15:11.723 INFO:teuthology.orchestra.run.vm05.stdout:(49/136): python3-cryptography-36.0.1-5.el9.x86 1.5 MB/s | 1.2 MB 00:00 2026-03-09T20:15:11.725 INFO:teuthology.orchestra.run.vm09.stdout: Installing : gperftools-libs-2.9.1-3.el9.x86_64 60/138 2026-03-09T20:15:11.728 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libarrow-doc-9.0.0-15.el9.noarch 61/138 2026-03-09T20:15:11.732 INFO:teuthology.orchestra.run.vm05.stdout:(50/136): zip-3.0-35.el9.x86_64.rpm 1.4 MB/s | 266 kB 00:00 2026-03-09T20:15:11.761 INFO:teuthology.orchestra.run.vm09.stdout: Installing : grpc-data-1.46.7-10.el9.noarch 62/138 2026-03-09T20:15:11.821 INFO:teuthology.orchestra.run.vm09.stdout: Installing : abseil-cpp-20211102.0-4.el9.x86_64 63/138 2026-03-09T20:15:11.834 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-grpcio-1.46.7-10.el9.x86_64 64/138 2026-03-09T20:15:11.844 INFO:teuthology.orchestra.run.vm09.stdout: Installing : socat-1.7.4.1-8.el9.x86_64 65/138 2026-03-09T20:15:11.850 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-toml-0.10.2-6.el9.noarch 66/138 2026-03-09T20:15:11.858 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jaraco-functools-3.5.0-2.el9.noarch 67/138 2026-03-09T20:15:11.864 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jaraco-text-4.0.0-2.el9.noarch 68/138 2026-03-09T20:15:11.922 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jaraco-collections-3.0.0-8.el9.noarch 69/138 2026-03-09T20:15:11.951 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-tempora-5.0.0-2.el9.noarch 70/138 2026-03-09T20:15:11.988 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-portend-3.1.0-2.el9.noarch 71/138 2026-03-09T20:15:12.005 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-protobuf-3.14.0-17.el9.noarch 72/138 2026-03-09T20:15:12.050 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-grpcio-tools-1.46.7-10.el9.x86_64 73/138 2026-03-09T20:15:12.343 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-devel-3.9.25-3.el9.x86_64 74/138 2026-03-09T20:15:12.527 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-babel-2.9.1-2.el9.noarch 75/138 2026-03-09T20:15:12.563 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jinja2-2.11.3-8.el9.noarch 76/138 2026-03-09T20:15:12.627 INFO:teuthology.orchestra.run.vm09.stdout: Installing : openblas-0.3.29-1.el9.x86_64 77/138 2026-03-09T20:15:12.629 INFO:teuthology.orchestra.run.vm09.stdout: Installing : openblas-openmp-0.3.29-1.el9.x86_64 78/138 2026-03-09T20:15:12.655 INFO:teuthology.orchestra.run.vm09.stdout: Installing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 79/138 2026-03-09T20:15:13.145 INFO:teuthology.orchestra.run.vm09.stdout: Installing : flexiblas-netlib-3.0.4-9.el9.x86_64 80/138 2026-03-09T20:15:13.176 INFO:teuthology.orchestra.run.vm05.stdout:(51/136): 
flexiblas-3.0.4-9.el9.x86_64.rpm 20 kB/s | 30 kB 00:01 2026-03-09T20:15:13.233 INFO:teuthology.orchestra.run.vm05.stdout:(52/136): boost-program-options-1.75.0-13.el9.x 69 kB/s | 104 kB 00:01 2026-03-09T20:15:13.240 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-numpy-1:1.23.5-2.el9.x86_64 81/138 2026-03-09T20:15:13.264 INFO:teuthology.orchestra.run.vm05.stdout:(53/136): flexiblas-openblas-openmp-3.0.4-9.el9 497 kB/s | 15 kB 00:00 2026-03-09T20:15:13.323 INFO:teuthology.orchestra.run.vm05.stdout:(54/136): libnbd-1.20.3-4.el9.x86_64.rpm 2.7 MB/s | 164 kB 00:00 2026-03-09T20:15:13.355 INFO:teuthology.orchestra.run.vm05.stdout:(55/136): libpmemobj-1.12.1-1.el9.x86_64.rpm 5.0 MB/s | 160 kB 00:00 2026-03-09T20:15:13.408 INFO:teuthology.orchestra.run.vm05.stdout:(56/136): ceph-osd-19.2.3-678.ge911bdeb.el9.x86 1.3 MB/s | 17 MB 00:12 2026-03-09T20:15:13.409 INFO:teuthology.orchestra.run.vm05.stdout:(57/136): librabbitmq-0.11.0-7.el9.x86_64.rpm 837 kB/s | 45 kB 00:00 2026-03-09T20:15:13.420 INFO:teuthology.orchestra.run.vm05.stdout:(58/136): flexiblas-netlib-3.0.4-9.el9.x86_64.r 12 MB/s | 3.0 MB 00:00 2026-03-09T20:15:13.442 INFO:teuthology.orchestra.run.vm05.stdout:(59/136): libstoragemgmt-1.10.1-1.el9.x86_64.rp 7.5 MB/s | 246 kB 00:00 2026-03-09T20:15:13.454 INFO:teuthology.orchestra.run.vm05.stdout:(60/136): libxslt-1.1.34-12.el9.x86_64.rpm 6.8 MB/s | 233 kB 00:00 2026-03-09T20:15:13.476 INFO:teuthology.orchestra.run.vm05.stdout:(61/136): lttng-ust-2.12.0-6.el9.x86_64.rpm 8.5 MB/s | 292 kB 00:00 2026-03-09T20:15:13.486 INFO:teuthology.orchestra.run.vm05.stdout:(62/136): lua-5.4.4-4.el9.x86_64.rpm 5.9 MB/s | 188 kB 00:00 2026-03-09T20:15:13.506 INFO:teuthology.orchestra.run.vm05.stdout:(63/136): openblas-0.3.29-1.el9.x86_64.rpm 1.4 MB/s | 42 kB 00:00 2026-03-09T20:15:13.571 INFO:teuthology.orchestra.run.vm05.stdout:(64/136): protobuf-3.14.0-17.el9.x86_64.rpm 16 MB/s | 1.0 MB 00:00 2026-03-09T20:15:13.768 INFO:teuthology.orchestra.run.vm05.stdout:(65/136): openblas-openmp-0.3.29-1.el9.x86_64.r 19 MB/s | 5.3 MB 00:00 2026-03-09T20:15:13.801 INFO:teuthology.orchestra.run.vm05.stdout:(66/136): python3-devel-3.9.25-3.el9.x86_64.rpm 7.4 MB/s | 244 kB 00:00 2026-03-09T20:15:13.833 INFO:teuthology.orchestra.run.vm05.stdout:(67/136): librdkafka-1.6.1-102.el9.x86_64.rpm 1.5 MB/s | 662 kB 00:00 2026-03-09T20:15:13.835 INFO:teuthology.orchestra.run.vm05.stdout:(68/136): python3-jinja2-2.11.3-8.el9.noarch.rp 7.2 MB/s | 249 kB 00:00 2026-03-09T20:15:13.874 INFO:teuthology.orchestra.run.vm05.stdout:(69/136): python3-babel-2.9.1-2.el9.noarch.rpm 20 MB/s | 6.0 MB 00:00 2026-03-09T20:15:13.874 INFO:teuthology.orchestra.run.vm05.stdout:(70/136): python3-jmespath-1.0.1-1.el9.noarch.r 1.1 MB/s | 48 kB 00:00 2026-03-09T20:15:13.876 INFO:teuthology.orchestra.run.vm05.stdout:(71/136): python3-libstoragemgmt-1.10.1-1.el9.x 4.3 MB/s | 177 kB 00:00 2026-03-09T20:15:13.906 INFO:teuthology.orchestra.run.vm05.stdout:(72/136): python3-mako-1.1.4-6.el9.noarch.rpm 5.3 MB/s | 172 kB 00:00 2026-03-09T20:15:13.907 INFO:teuthology.orchestra.run.vm05.stdout:(73/136): python3-markupsafe-1.1.1-12.el9.x86_6 1.0 MB/s | 35 kB 00:00 2026-03-09T20:15:13.972 INFO:teuthology.orchestra.run.vm05.stdout:(74/136): python3-numpy-f2py-1.23.5-2.el9.x86_6 6.6 MB/s | 442 kB 00:00 2026-03-09T20:15:13.973 INFO:teuthology.orchestra.run.vm05.stdout:(75/136): python3-packaging-20.9-5.el9.noarch.r 1.2 MB/s | 77 kB 00:00 2026-03-09T20:15:14.005 INFO:teuthology.orchestra.run.vm05.stdout:(76/136): python3-protobuf-3.14.0-17.el9.noarch 8.0 MB/s 
| 267 kB 00:00 2026-03-09T20:15:14.006 INFO:teuthology.orchestra.run.vm05.stdout:(77/136): python3-pyasn1-0.4.8-7.el9.noarch.rpm 4.7 MB/s | 157 kB 00:00 2026-03-09T20:15:14.039 INFO:teuthology.orchestra.run.vm05.stdout:(78/136): python3-pyasn1-modules-0.4.8-7.el9.no 8.3 MB/s | 277 kB 00:00 2026-03-09T20:15:14.039 INFO:teuthology.orchestra.run.vm05.stdout:(79/136): python3-requests-oauthlib-1.3.0-12.el 1.6 MB/s | 54 kB 00:00 2026-03-09T20:15:14.075 INFO:teuthology.orchestra.run.vm05.stdout:(80/136): python3-toml-0.10.2-6.el9.noarch.rpm 1.2 MB/s | 42 kB 00:00 2026-03-09T20:15:14.100 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 82/138 2026-03-09T20:15:14.107 INFO:teuthology.orchestra.run.vm05.stdout:(81/136): qatlib-25.08.0-2.el9.x86_64.rpm 7.3 MB/s | 240 kB 00:00 2026-03-09T20:15:14.129 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-scipy-1.9.3-2.el9.x86_64 83/138 2026-03-09T20:15:14.136 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libxslt-1.1.34-12.el9.x86_64 84/138 2026-03-09T20:15:14.137 INFO:teuthology.orchestra.run.vm05.stdout:(82/136): qatlib-service-25.08.0-2.el9.x86_64.r 1.2 MB/s | 37 kB 00:00 2026-03-09T20:15:14.141 INFO:teuthology.orchestra.run.vm09.stdout: Installing : xmlstarlet-1.6.1-20.el9.x86_64 85/138 2026-03-09T20:15:14.169 INFO:teuthology.orchestra.run.vm05.stdout:(83/136): qatzip-libs-1.3.1-1.el9.x86_64.rpm 2.1 MB/s | 66 kB 00:00 2026-03-09T20:15:14.202 INFO:teuthology.orchestra.run.vm05.stdout:(84/136): socat-1.7.4.1-8.el9.x86_64.rpm 9.1 MB/s | 303 kB 00:00 2026-03-09T20:15:14.232 INFO:teuthology.orchestra.run.vm05.stdout:(85/136): xmlstarlet-1.6.1-20.el9.x86_64.rpm 2.1 MB/s | 64 kB 00:00 2026-03-09T20:15:14.266 INFO:teuthology.orchestra.run.vm05.stdout:(86/136): python3-numpy-1.23.5-2.el9.x86_64.rpm 16 MB/s | 6.1 MB 00:00 2026-03-09T20:15:14.313 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libpmemobj-1.12.1-1.el9.x86_64 86/138 2026-03-09T20:15:14.390 INFO:teuthology.orchestra.run.vm09.stdout: Upgrading : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 87/138 2026-03-09T20:15:14.440 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 87/138 2026-03-09T20:15:14.446 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 88/138 2026-03-09T20:15:14.456 INFO:teuthology.orchestra.run.vm09.stdout: Installing : boost-program-options-1.75.0-13.el9.x86_64 89/138 2026-03-09T20:15:14.591 INFO:teuthology.orchestra.run.vm05.stdout:(87/136): lua-devel-5.4.4-4.el9.x86_64.rpm 62 kB/s | 22 kB 00:00 2026-03-09T20:15:14.611 INFO:teuthology.orchestra.run.vm05.stdout:(88/136): abseil-cpp-20211102.0-4.el9.x86_64.rp 27 MB/s | 551 kB 00:00 2026-03-09T20:15:14.623 INFO:teuthology.orchestra.run.vm05.stdout:(89/136): gperftools-libs-2.9.1-3.el9.x86_64.rp 26 MB/s | 308 kB 00:00 2026-03-09T20:15:14.625 INFO:teuthology.orchestra.run.vm05.stdout:(90/136): grpc-data-1.46.7-10.el9.noarch.rpm 8.9 MB/s | 19 kB 00:00 2026-03-09T20:15:14.698 INFO:teuthology.orchestra.run.vm05.stdout:(91/136): libarrow-9.0.0-15.el9.x86_64.rpm 61 MB/s | 4.4 MB 00:00 2026-03-09T20:15:14.700 INFO:teuthology.orchestra.run.vm05.stdout:(92/136): libarrow-doc-9.0.0-15.el9.noarch.rpm 9.7 MB/s | 25 kB 00:00 2026-03-09T20:15:14.704 INFO:teuthology.orchestra.run.vm05.stdout:(93/136): liboath-2.6.12-1.el9.x86_64.rpm 16 MB/s | 49 kB 00:00 2026-03-09T20:15:14.707 INFO:teuthology.orchestra.run.vm05.stdout:(94/136): libunwind-1.6.2-1.el9.x86_64.rpm 23 MB/s | 67 
kB 00:00 2026-03-09T20:15:14.711 INFO:teuthology.orchestra.run.vm05.stdout:(95/136): luarocks-3.9.2-5.el9.noarch.rpm 33 MB/s | 151 kB 00:00 2026-03-09T20:15:14.724 INFO:teuthology.orchestra.run.vm05.stdout:(96/136): parquet-libs-9.0.0-15.el9.x86_64.rpm 69 MB/s | 838 kB 00:00 2026-03-09T20:15:14.733 INFO:teuthology.orchestra.run.vm05.stdout:(97/136): python3-asyncssh-2.13.2-5.el9.noarch. 63 MB/s | 548 kB 00:00 2026-03-09T20:15:14.735 INFO:teuthology.orchestra.run.vm05.stdout:(98/136): python3-autocommand-2.2.2-8.el9.noarc 12 MB/s | 29 kB 00:00 2026-03-09T20:15:14.736 INFO:teuthology.orchestra.run.vm09.stdout: Installing : parquet-libs-9.0.0-15.el9.x86_64 90/138 2026-03-09T20:15:14.738 INFO:teuthology.orchestra.run.vm05.stdout:(99/136): python3-backports-tarfile-1.2.0-1.el9 21 MB/s | 60 kB 00:00 2026-03-09T20:15:14.739 INFO:teuthology.orchestra.run.vm09.stdout: Installing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 91/138 2026-03-09T20:15:14.741 INFO:teuthology.orchestra.run.vm05.stdout:(100/136): python3-bcrypt-3.2.2-1.el9.x86_64.rp 19 MB/s | 43 kB 00:00 2026-03-09T20:15:14.743 INFO:teuthology.orchestra.run.vm05.stdout:(101/136): python3-cachetools-4.2.4-1.el9.noarc 14 MB/s | 32 kB 00:00 2026-03-09T20:15:14.745 INFO:teuthology.orchestra.run.vm05.stdout:(102/136): python3-certifi-2023.05.07-4.el9.noa 7.4 MB/s | 14 kB 00:00 2026-03-09T20:15:14.749 INFO:teuthology.orchestra.run.vm05.stdout:(103/136): python3-cheroot-10.0.1-4.el9.noarch. 44 MB/s | 173 kB 00:00 2026-03-09T20:15:14.755 INFO:teuthology.orchestra.run.vm05.stdout:(104/136): python3-cherrypy-18.6.1-2.el9.noarch 59 MB/s | 358 kB 00:00 2026-03-09T20:15:14.761 INFO:teuthology.orchestra.run.vm05.stdout:(105/136): python3-google-auth-2.45.0-1.el9.noa 52 MB/s | 254 kB 00:00 2026-03-09T20:15:14.763 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 91/138 2026-03-09T20:15:14.776 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 92/138 2026-03-09T20:15:14.792 INFO:teuthology.orchestra.run.vm05.stdout:(106/136): python3-grpcio-1.46.7-10.el9.x86_64. 66 MB/s | 2.0 MB 00:00 2026-03-09T20:15:14.797 INFO:teuthology.orchestra.run.vm05.stdout:(107/136): python3-grpcio-tools-1.46.7-10.el9.x 33 MB/s | 144 kB 00:00 2026-03-09T20:15:14.799 INFO:teuthology.orchestra.run.vm05.stdout:(108/136): python3-jaraco-8.2.1-3.el9.noarch.rp 5.4 MB/s | 11 kB 00:00 2026-03-09T20:15:14.801 INFO:teuthology.orchestra.run.vm05.stdout:(109/136): python3-jaraco-classes-3.2.1-5.el9.n 8.1 MB/s | 18 kB 00:00 2026-03-09T20:15:14.804 INFO:teuthology.orchestra.run.vm05.stdout:(110/136): python3-jaraco-collections-3.0.0-8.e 11 MB/s | 23 kB 00:00 2026-03-09T20:15:14.806 INFO:teuthology.orchestra.run.vm05.stdout:(111/136): python3-jaraco-context-6.0.1-3.el9.n 9.9 MB/s | 20 kB 00:00 2026-03-09T20:15:14.808 INFO:teuthology.orchestra.run.vm05.stdout:(112/136): python3-jaraco-functools-3.5.0-2.el9 8.8 MB/s | 19 kB 00:00 2026-03-09T20:15:14.810 INFO:teuthology.orchestra.run.vm05.stdout:(113/136): python3-jaraco-text-4.0.0-2.el9.noar 11 MB/s | 26 kB 00:00 2026-03-09T20:15:14.828 INFO:teuthology.orchestra.run.vm05.stdout:(114/136): python3-kubernetes-26.1.0-3.el9.noar 60 MB/s | 1.0 MB 00:00 2026-03-09T20:15:14.831 INFO:teuthology.orchestra.run.vm05.stdout:(115/136): python3-logutils-0.3.5-21.el9.noarch 18 MB/s | 46 kB 00:00 2026-03-09T20:15:14.834 INFO:teuthology.orchestra.run.vm05.stdout:(116/136): python3-more-itertools-8.12.0-2.el9. 
29 MB/s | 79 kB 00:00 2026-03-09T20:15:14.837 INFO:teuthology.orchestra.run.vm05.stdout:(117/136): python3-natsort-7.1.1-5.el9.noarch.r 19 MB/s | 58 kB 00:00 2026-03-09T20:15:14.842 INFO:teuthology.orchestra.run.vm05.stdout:(118/136): protobuf-compiler-3.14.0-17.el9.x86_ 1.5 MB/s | 862 kB 00:00 2026-03-09T20:15:14.843 INFO:teuthology.orchestra.run.vm05.stdout:(119/136): python3-pecan-1.4.2-3.el9.noarch.rpm 44 MB/s | 272 kB 00:00 2026-03-09T20:15:14.846 INFO:teuthology.orchestra.run.vm05.stdout:(120/136): python3-portend-3.1.0-2.el9.noarch.r 3.7 MB/s | 16 kB 00:00 2026-03-09T20:15:14.848 INFO:teuthology.orchestra.run.vm05.stdout:(121/136): python3-repoze-lru-0.7-16.el9.noarch 15 MB/s | 31 kB 00:00 2026-03-09T20:15:14.850 INFO:teuthology.orchestra.run.vm05.stdout:(122/136): python3-pyOpenSSL-21.0.0-1.el9.noarc 14 MB/s | 90 kB 00:00 2026-03-09T20:15:14.855 INFO:teuthology.orchestra.run.vm05.stdout:(123/136): python3-rsa-4.9-2.el9.noarch.rpm 13 MB/s | 59 kB 00:00 2026-03-09T20:15:14.857 INFO:teuthology.orchestra.run.vm05.stdout:(124/136): python3-routes-2.5.1-5.el9.noarch.rp 24 MB/s | 188 kB 00:00 2026-03-09T20:15:14.858 INFO:teuthology.orchestra.run.vm05.stdout:(125/136): python3-tempora-5.0.0-2.el9.noarch.r 12 MB/s | 36 kB 00:00 2026-03-09T20:15:14.864 INFO:teuthology.orchestra.run.vm05.stdout:(126/136): python3-typing-extensions-4.15.0-1.e 13 MB/s | 86 kB 00:00 2026-03-09T20:15:14.865 INFO:teuthology.orchestra.run.vm05.stdout:(127/136): python3-webob-1.8.8-2.el9.noarch.rpm 33 MB/s | 230 kB 00:00 2026-03-09T20:15:14.867 INFO:teuthology.orchestra.run.vm05.stdout:(128/136): python3-websocket-client-1.2.3-2.el9 24 MB/s | 90 kB 00:00 2026-03-09T20:15:14.894 INFO:teuthology.orchestra.run.vm05.stdout:(129/136): python3-xmltodict-0.12.0-15.el9.noar 867 kB/s | 22 kB 00:00 2026-03-09T20:15:14.915 INFO:teuthology.orchestra.run.vm05.stdout:(130/136): python3-werkzeug-2.0.3-3.el9.1.noarc 8.4 MB/s | 427 kB 00:00 2026-03-09T20:15:15.001 INFO:teuthology.orchestra.run.vm05.stdout:(131/136): python3-zc-lockfile-2.0-10.el9.noarc 187 kB/s | 20 kB 00:00 2026-03-09T20:15:15.015 INFO:teuthology.orchestra.run.vm05.stdout:(132/136): re2-20211101-20.el9.x86_64.rpm 1.9 MB/s | 191 kB 00:00 2026-03-09T20:15:15.066 INFO:teuthology.orchestra.run.vm05.stdout:(133/136): python3-scipy-1.9.3-2.el9.x86_64.rpm 19 MB/s | 19 MB 00:01 2026-03-09T20:15:15.089 INFO:teuthology.orchestra.run.vm05.stdout:(134/136): thrift-0.15.0-4.el9.x86_64.rpm 18 MB/s | 1.6 MB 00:00 2026-03-09T20:15:15.976 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138 2026-03-09T20:15:16.119 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138 2026-03-09T20:15:16.146 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138 2026-03-09T20:15:16.152 INFO:teuthology.orchestra.run.vm05.stdout:(135/136): librados2-19.2.3-678.ge911bdeb.el9.x 3.0 MB/s | 3.4 MB 00:01 2026-03-09T20:15:16.169 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-ply-3.11-14.el9.noarch 94/138 2026-03-09T20:15:16.192 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-pycparser-2.20-6.el9.noarch 95/138 2026-03-09T20:15:16.290 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-cffi-1.14.5-5.el9.x86_64 96/138 2026-03-09T20:15:16.305 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-cryptography-36.0.1-5.el9.x86_64 97/138 2026-03-09T20:15:16.335 
INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-pyOpenSSL-21.0.0-1.el9.noarch 98/138 2026-03-09T20:15:16.400 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-cheroot-10.0.1-4.el9.noarch 99/138 2026-03-09T20:15:16.469 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-cherrypy-18.6.1-2.el9.noarch 100/138 2026-03-09T20:15:16.502 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-asyncssh-2.13.2-5.el9.noarch 101/138 2026-03-09T20:15:16.508 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-bcrypt-3.2.2-1.el9.x86_64 102/138 2026-03-09T20:15:16.514 INFO:teuthology.orchestra.run.vm09.stdout: Installing : pciutils-3.7.0-7.el9.x86_64 103/138 2026-03-09T20:15:16.519 INFO:teuthology.orchestra.run.vm09.stdout: Installing : qatlib-25.08.0-2.el9.x86_64 104/138 2026-03-09T20:15:16.574 INFO:teuthology.orchestra.run.vm09.stdout: Installing : qatlib-service-25.08.0-2.el9.x86_64 105/138 2026-03-09T20:15:16.592 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 105/138 2026-03-09T20:15:16.674 INFO:teuthology.orchestra.run.vm05.stdout:(136/136): librbd1-19.2.3-678.ge911bdeb.el9.x86 2.0 MB/s | 3.2 MB 00:01 2026-03-09T20:15:16.676 INFO:teuthology.orchestra.run.vm05.stdout:-------------------------------------------------------------------------------- 2026-03-09T20:15:16.676 INFO:teuthology.orchestra.run.vm05.stdout:Total 9.5 MB/s | 210 MB 00:22 2026-03-09T20:15:17.071 INFO:teuthology.orchestra.run.vm09.stdout: Installing : qatzip-libs-1.3.1-1.el9.x86_64 106/138 2026-03-09T20:15:17.077 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 107/138 2026-03-09T20:15:17.118 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 107/138 2026-03-09T20:15:17.118 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /usr/lib/systemd/system/ceph.target. 2026-03-09T20:15:17.118 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /usr/lib/systemd/system/ceph-crash.service. 2026-03-09T20:15:17.118 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:15:17.124 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 108/138 2026-03-09T20:15:17.334 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check 2026-03-09T20:15:17.388 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded. 2026-03-09T20:15:17.388 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test 2026-03-09T20:15:18.248 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded. 
2026-03-09T20:15:18.248 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction 2026-03-09T20:15:19.188 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1 2026-03-09T20:15:19.203 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-more-itertools-8.12.0-2.el9.noarch 1/138 2026-03-09T20:15:19.217 INFO:teuthology.orchestra.run.vm05.stdout: Installing : thrift-0.15.0-4.el9.x86_64 2/138 2026-03-09T20:15:19.441 INFO:teuthology.orchestra.run.vm05.stdout: Installing : lttng-ust-2.12.0-6.el9.x86_64 3/138 2026-03-09T20:15:19.443 INFO:teuthology.orchestra.run.vm05.stdout: Upgrading : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138 2026-03-09T20:15:19.507 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138 2026-03-09T20:15:19.510 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/138 2026-03-09T20:15:19.545 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/138 2026-03-09T20:15:19.555 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 6/138 2026-03-09T20:15:19.560 INFO:teuthology.orchestra.run.vm05.stdout: Installing : librdkafka-1.6.1-102.el9.x86_64 7/138 2026-03-09T20:15:19.562 INFO:teuthology.orchestra.run.vm05.stdout: Installing : librabbitmq-0.11.0-7.el9.x86_64 8/138 2026-03-09T20:15:19.567 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jaraco-8.2.1-3.el9.noarch 9/138 2026-03-09T20:15:19.577 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libnbd-1.20.3-4.el9.x86_64 10/138 2026-03-09T20:15:19.579 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138 2026-03-09T20:15:19.618 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138 2026-03-09T20:15:19.621 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 12/138 2026-03-09T20:15:19.716 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 12/138 2026-03-09T20:15:19.756 INFO:teuthology.orchestra.run.vm05.stdout: Installing : re2-1:20211101-20.el9.x86_64 13/138 2026-03-09T20:15:19.797 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libarrow-9.0.0-15.el9.x86_64 14/138 2026-03-09T20:15:19.804 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-werkzeug-2.0.3-3.el9.1.noarch 15/138 2026-03-09T20:15:19.831 INFO:teuthology.orchestra.run.vm05.stdout: Installing : liboath-2.6.12-1.el9.x86_64 16/138 2026-03-09T20:15:19.850 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-pyasn1-0.4.8-7.el9.noarch 17/138 2026-03-09T20:15:19.860 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-packaging-20.9-5.el9.noarch 18/138 2026-03-09T20:15:19.872 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-markupsafe-1.1.1-12.el9.x86_64 19/138 2026-03-09T20:15:19.879 INFO:teuthology.orchestra.run.vm05.stdout: Installing : protobuf-3.14.0-17.el9.x86_64 20/138 2026-03-09T20:15:19.884 INFO:teuthology.orchestra.run.vm05.stdout: Installing : lua-5.4.4-4.el9.x86_64 21/138 2026-03-09T20:15:19.892 INFO:teuthology.orchestra.run.vm05.stdout: Installing : flexiblas-3.0.4-9.el9.x86_64 22/138 2026-03-09T20:15:19.922 INFO:teuthology.orchestra.run.vm05.stdout: Installing : unzip-6.0-59.el9.x86_64 23/138 2026-03-09T20:15:19.941 
INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-urllib3-1.26.5-7.el9.noarch 24/138 2026-03-09T20:15:19.947 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-requests-2.25.1-10.el9.noarch 25/138 2026-03-09T20:15:19.955 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libquadmath-11.5.0-14.el9.x86_64 26/138 2026-03-09T20:15:19.958 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libgfortran-11.5.0-14.el9.x86_64 27/138 2026-03-09T20:15:19.992 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ledmon-libs-1.1.0-3.el9.x86_64 28/138 2026-03-09T20:15:19.998 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 29/138 2026-03-09T20:15:20.010 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 30/138 2026-03-09T20:15:20.025 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 31/138 2026-03-09T20:15:20.080 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-requests-oauthlib-1.3.0-12.el9.noarch 32/138 2026-03-09T20:15:20.112 INFO:teuthology.orchestra.run.vm05.stdout: Installing : zip-3.0-35.el9.x86_64 33/138 2026-03-09T20:15:20.118 INFO:teuthology.orchestra.run.vm05.stdout: Installing : luarocks-3.9.2-5.el9.noarch 34/138 2026-03-09T20:15:20.127 INFO:teuthology.orchestra.run.vm05.stdout: Installing : lua-devel-5.4.4-4.el9.x86_64 35/138 2026-03-09T20:15:20.168 INFO:teuthology.orchestra.run.vm05.stdout: Installing : protobuf-compiler-3.14.0-17.el9.x86_64 36/138 2026-03-09T20:15:20.233 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-mako-1.1.4-6.el9.noarch 37/138 2026-03-09T20:15:20.254 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-pyasn1-modules-0.4.8-7.el9.noarch 38/138 2026-03-09T20:15:20.262 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-rsa-4.9-2.el9.noarch 39/138 2026-03-09T20:15:20.272 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jaraco-classes-3.2.1-5.el9.noarch 40/138 2026-03-09T20:15:20.282 INFO:teuthology.orchestra.run.vm05.stdout: Installing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 41/138 2026-03-09T20:15:20.287 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-zc-lockfile-2.0-10.el9.noarch 42/138 2026-03-09T20:15:20.306 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-xmltodict-0.12.0-15.el9.noarch 43/138 2026-03-09T20:15:20.333 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-websocket-client-1.2.3-2.el9.noarch 44/138 2026-03-09T20:15:20.341 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-webob-1.8.8-2.el9.noarch 45/138 2026-03-09T20:15:20.348 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-typing-extensions-4.15.0-1.el9.noarch 46/138 2026-03-09T20:15:20.362 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-repoze-lru-0.7-16.el9.noarch 47/138 2026-03-09T20:15:20.375 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-routes-2.5.1-5.el9.noarch 48/138 2026-03-09T20:15:20.387 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-natsort-7.1.1-5.el9.noarch 49/138 2026-03-09T20:15:20.454 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-logutils-0.3.5-21.el9.noarch 50/138 2026-03-09T20:15:20.472 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-pecan-1.4.2-3.el9.noarch 51/138 2026-03-09T20:15:20.482 INFO:teuthology.orchestra.run.vm05.stdout: Installing : 
python3-certifi-2023.05.07-4.el9.noarch 52/138 2026-03-09T20:15:20.542 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-cachetools-4.2.4-1.el9.noarch 53/138 2026-03-09T20:15:21.078 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-google-auth-1:2.45.0-1.el9.noarch 54/138 2026-03-09T20:15:21.094 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-kubernetes-1:26.1.0-3.el9.noarch 55/138 2026-03-09T20:15:21.100 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-backports-tarfile-1.2.0-1.el9.noarch 56/138 2026-03-09T20:15:21.110 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jaraco-context-6.0.1-3.el9.noarch 57/138 2026-03-09T20:15:21.116 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-autocommand-2.2.2-8.el9.noarch 58/138 2026-03-09T20:15:21.123 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libunwind-1.6.2-1.el9.x86_64 59/138 2026-03-09T20:15:21.128 INFO:teuthology.orchestra.run.vm05.stdout: Installing : gperftools-libs-2.9.1-3.el9.x86_64 60/138 2026-03-09T20:15:21.131 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libarrow-doc-9.0.0-15.el9.noarch 61/138 2026-03-09T20:15:21.163 INFO:teuthology.orchestra.run.vm05.stdout: Installing : grpc-data-1.46.7-10.el9.noarch 62/138 2026-03-09T20:15:21.219 INFO:teuthology.orchestra.run.vm05.stdout: Installing : abseil-cpp-20211102.0-4.el9.x86_64 63/138 2026-03-09T20:15:21.239 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-grpcio-1.46.7-10.el9.x86_64 64/138 2026-03-09T20:15:21.247 INFO:teuthology.orchestra.run.vm05.stdout: Installing : socat-1.7.4.1-8.el9.x86_64 65/138 2026-03-09T20:15:21.254 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-toml-0.10.2-6.el9.noarch 66/138 2026-03-09T20:15:21.262 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jaraco-functools-3.5.0-2.el9.noarch 67/138 2026-03-09T20:15:21.270 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jaraco-text-4.0.0-2.el9.noarch 68/138 2026-03-09T20:15:21.279 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jaraco-collections-3.0.0-8.el9.noarch 69/138 2026-03-09T20:15:21.285 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-tempora-5.0.0-2.el9.noarch 70/138 2026-03-09T20:15:21.321 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-portend-3.1.0-2.el9.noarch 71/138 2026-03-09T20:15:21.335 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-protobuf-3.14.0-17.el9.noarch 72/138 2026-03-09T20:15:21.380 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-grpcio-tools-1.46.7-10.el9.x86_64 73/138 2026-03-09T20:15:21.702 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-devel-3.9.25-3.el9.x86_64 74/138 2026-03-09T20:15:21.735 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-babel-2.9.1-2.el9.noarch 75/138 2026-03-09T20:15:21.743 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jinja2-2.11.3-8.el9.noarch 76/138 2026-03-09T20:15:21.819 INFO:teuthology.orchestra.run.vm05.stdout: Installing : openblas-0.3.29-1.el9.x86_64 77/138 2026-03-09T20:15:21.822 INFO:teuthology.orchestra.run.vm05.stdout: Installing : openblas-openmp-0.3.29-1.el9.x86_64 78/138 2026-03-09T20:15:21.852 INFO:teuthology.orchestra.run.vm05.stdout: Installing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 79/138 2026-03-09T20:15:22.274 INFO:teuthology.orchestra.run.vm05.stdout: Installing : flexiblas-netlib-3.0.4-9.el9.x86_64 80/138 2026-03-09T20:15:22.430 
INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-numpy-1:1.23.5-2.el9.x86_64 81/138 2026-03-09T20:15:23.326 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 82/138 2026-03-09T20:15:23.367 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-scipy-1.9.3-2.el9.x86_64 83/138 2026-03-09T20:15:23.380 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libxslt-1.1.34-12.el9.x86_64 84/138 2026-03-09T20:15:23.386 INFO:teuthology.orchestra.run.vm05.stdout: Installing : xmlstarlet-1.6.1-20.el9.x86_64 85/138 2026-03-09T20:15:23.552 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libpmemobj-1.12.1-1.el9.x86_64 86/138 2026-03-09T20:15:23.558 INFO:teuthology.orchestra.run.vm05.stdout: Upgrading : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 87/138 2026-03-09T20:15:23.598 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 87/138 2026-03-09T20:15:23.608 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 88/138 2026-03-09T20:15:23.618 INFO:teuthology.orchestra.run.vm05.stdout: Installing : boost-program-options-1.75.0-13.el9.x86_64 89/138 2026-03-09T20:15:23.829 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 108/138 2026-03-09T20:15:23.829 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /sys 2026-03-09T20:15:23.829 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /proc 2026-03-09T20:15:23.829 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /mnt 2026-03-09T20:15:23.829 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /var/tmp 2026-03-09T20:15:23.829 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /home 2026-03-09T20:15:23.829 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /root 2026-03-09T20:15:23.829 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /tmp 2026-03-09T20:15:23.829 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:15:23.881 INFO:teuthology.orchestra.run.vm05.stdout: Installing : parquet-libs-9.0.0-15.el9.x86_64 90/138 2026-03-09T20:15:23.887 INFO:teuthology.orchestra.run.vm05.stdout: Installing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 91/138 2026-03-09T20:15:23.912 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 91/138 2026-03-09T20:15:23.918 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 92/138 2026-03-09T20:15:24.002 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 109/138 2026-03-09T20:15:24.029 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 109/138 2026-03-09T20:15:24.029 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T20:15:24.029 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service". 2026-03-09T20:15:24.029 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target. 2026-03-09T20:15:24.029 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target. 
2026-03-09T20:15:24.029 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:15:24.310 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 110/138 2026-03-09T20:15:24.336 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 110/138 2026-03-09T20:15:24.336 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T20:15:24.336 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service". 2026-03-09T20:15:24.336 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target. 2026-03-09T20:15:24.336 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target. 2026-03-09T20:15:24.336 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:15:24.346 INFO:teuthology.orchestra.run.vm09.stdout: Installing : mailcap-2.1.49-5.el9.noarch 111/138 2026-03-09T20:15:24.349 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libconfig-1.7.2-9.el9.x86_64 112/138 2026-03-09T20:15:24.373 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 113/138 2026-03-09T20:15:24.374 INFO:teuthology.orchestra.run.vm09.stdout:Creating group 'qat' with GID 994. 2026-03-09T20:15:24.374 INFO:teuthology.orchestra.run.vm09.stdout:Creating group 'libstoragemgmt' with GID 993. 2026-03-09T20:15:24.374 INFO:teuthology.orchestra.run.vm09.stdout:Creating user 'libstoragemgmt' (daemon account for libstoragemgmt) with UID 993 and GID 993. 2026-03-09T20:15:24.374 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:15:24.401 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libstoragemgmt-1.10.1-1.el9.x86_64 113/138 2026-03-09T20:15:24.529 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 113/138 2026-03-09T20:15:24.529 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/libstoragemgmt.service → /usr/lib/systemd/system/libstoragemgmt.service. 2026-03-09T20:15:24.529 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:15:24.613 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 114/138 2026-03-09T20:15:24.702 INFO:teuthology.orchestra.run.vm09.stdout: Installing : cryptsetup-2.8.1-3.el9.x86_64 115/138 2026-03-09T20:15:24.708 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 116/138 2026-03-09T20:15:24.723 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 116/138 2026-03-09T20:15:24.723 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T20:15:24.723 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service". 
2026-03-09T20:15:24.723 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:15:25.131 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138 2026-03-09T20:15:25.227 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138 2026-03-09T20:15:25.251 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138 2026-03-09T20:15:25.270 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-ply-3.11-14.el9.noarch 94/138 2026-03-09T20:15:25.292 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-pycparser-2.20-6.el9.noarch 95/138 2026-03-09T20:15:25.393 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-cffi-1.14.5-5.el9.x86_64 96/138 2026-03-09T20:15:25.409 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-cryptography-36.0.1-5.el9.x86_64 97/138 2026-03-09T20:15:25.443 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-pyOpenSSL-21.0.0-1.el9.noarch 98/138 2026-03-09T20:15:25.497 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-cheroot-10.0.1-4.el9.noarch 99/138 2026-03-09T20:15:25.549 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 117/138 2026-03-09T20:15:25.560 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-cherrypy-18.6.1-2.el9.noarch 100/138 2026-03-09T20:15:25.571 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-asyncssh-2.13.2-5.el9.noarch 101/138 2026-03-09T20:15:25.577 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 117/138 2026-03-09T20:15:25.577 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T20:15:25.577 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service". 2026-03-09T20:15:25.577 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target. 2026-03-09T20:15:25.577 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target. 
2026-03-09T20:15:25.577 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:15:25.580 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-bcrypt-3.2.2-1.el9.x86_64 102/138 2026-03-09T20:15:25.588 INFO:teuthology.orchestra.run.vm05.stdout: Installing : pciutils-3.7.0-7.el9.x86_64 103/138 2026-03-09T20:15:25.594 INFO:teuthology.orchestra.run.vm05.stdout: Installing : qatlib-25.08.0-2.el9.x86_64 104/138 2026-03-09T20:15:25.597 INFO:teuthology.orchestra.run.vm05.stdout: Installing : qatlib-service-25.08.0-2.el9.x86_64 105/138 2026-03-09T20:15:25.620 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 105/138 2026-03-09T20:15:25.638 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 118/138 2026-03-09T20:15:25.642 INFO:teuthology.orchestra.run.vm09.stdout: Installing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 118/138 2026-03-09T20:15:25.649 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 119/138 2026-03-09T20:15:25.672 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 120/138 2026-03-09T20:15:25.676 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 121/138 2026-03-09T20:15:25.942 INFO:teuthology.orchestra.run.vm05.stdout: Installing : qatzip-libs-1.3.1-1.el9.x86_64 106/138 2026-03-09T20:15:25.948 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 107/138 2026-03-09T20:15:25.999 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 107/138 2026-03-09T20:15:25.999 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /usr/lib/systemd/system/ceph.target. 2026-03-09T20:15:25.999 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /usr/lib/systemd/system/ceph-crash.service. 2026-03-09T20:15:25.999 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:15:26.004 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 108/138 2026-03-09T20:15:26.259 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 121/138 2026-03-09T20:15:26.266 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 122/138 2026-03-09T20:15:26.833 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 122/138 2026-03-09T20:15:27.038 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 123/138 2026-03-09T20:15:27.105 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 123/138 2026-03-09T20:15:27.221 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 124/138 2026-03-09T20:15:27.224 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 125/138 2026-03-09T20:15:27.249 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 125/138 2026-03-09T20:15:27.249 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this. 
2026-03-09T20:15:27.249 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service". 2026-03-09T20:15:27.249 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target. 2026-03-09T20:15:27.249 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target. 2026-03-09T20:15:27.249 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:15:27.414 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 126/138 2026-03-09T20:15:27.427 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 126/138 2026-03-09T20:15:28.004 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 127/138 2026-03-09T20:15:28.007 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 128/138 2026-03-09T20:15:28.033 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 128/138 2026-03-09T20:15:28.033 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T20:15:28.033 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service". 2026-03-09T20:15:28.033 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target. 2026-03-09T20:15:28.033 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target. 2026-03-09T20:15:28.033 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:15:28.045 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 129/138 2026-03-09T20:15:28.067 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 129/138 2026-03-09T20:15:28.067 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T20:15:28.067 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service". 2026-03-09T20:15:28.067 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:15:28.226 INFO:teuthology.orchestra.run.vm09.stdout: Installing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 130/138 2026-03-09T20:15:28.250 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 130/138 2026-03-09T20:15:28.250 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T20:15:28.250 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service". 2026-03-09T20:15:28.250 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target. 
2026-03-09T20:15:28.250 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target. 2026-03-09T20:15:28.250 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:15:30.895 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 131/138 2026-03-09T20:15:30.907 INFO:teuthology.orchestra.run.vm09.stdout: Installing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 132/138 2026-03-09T20:15:30.913 INFO:teuthology.orchestra.run.vm09.stdout: Installing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 133/138 2026-03-09T20:15:30.972 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 134/138 2026-03-09T20:15:30.981 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 135/138 2026-03-09T20:15:30.985 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jmespath-1.0.1-1.el9.noarch 136/138 2026-03-09T20:15:30.985 INFO:teuthology.orchestra.run.vm09.stdout: Cleanup : librbd1-2:16.2.4-5.el9.x86_64 137/138 2026-03-09T20:15:31.003 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librbd1-2:16.2.4-5.el9.x86_64 137/138 2026-03-09T20:15:31.003 INFO:teuthology.orchestra.run.vm09.stdout: Cleanup : librados2-2:16.2.4-5.el9.x86_64 138/138 2026-03-09T20:15:32.819 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librados2-2:16.2.4-5.el9.x86_64 138/138 2026-03-09T20:15:32.819 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/138 2026-03-09T20:15:32.819 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/138 2026-03-09T20:15:32.819 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/138 2026-03-09T20:15:32.819 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138 2026-03-09T20:15:32.819 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/138 2026-03-09T20:15:32.819 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 6/138 2026-03-09T20:15:32.819 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 7/138 2026-03-09T20:15:32.820 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/138 2026-03-09T20:15:32.820 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 9/138 2026-03-09T20:15:32.820 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 10/138 2026-03-09T20:15:32.820 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138 2026-03-09T20:15:32.820 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 12/138 2026-03-09T20:15:32.820 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 13/138 2026-03-09T20:15:32.820 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 14/138 2026-03-09T20:15:32.820 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 15/138 2026-03-09T20:15:32.820 INFO:teuthology.orchestra.run.vm09.stdout: 
Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 16/138 2026-03-09T20:15:32.820 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 17/138 2026-03-09T20:15:32.820 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 18/138 2026-03-09T20:15:32.820 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 19/138 2026-03-09T20:15:32.820 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 20/138 2026-03-09T20:15:32.820 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 21/138 2026-03-09T20:15:32.820 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 22/138 2026-03-09T20:15:32.820 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 23/138 2026-03-09T20:15:32.820 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 24/138 2026-03-09T20:15:32.820 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 25/138 2026-03-09T20:15:32.820 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 26/138 2026-03-09T20:15:32.820 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 27/138 2026-03-09T20:15:32.820 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 28/138 2026-03-09T20:15:32.820 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 29/138 2026-03-09T20:15:32.821 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 30/138 2026-03-09T20:15:32.821 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 31/138 2026-03-09T20:15:32.821 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 32/138 2026-03-09T20:15:32.821 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 33/138 2026-03-09T20:15:32.821 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 34/138 2026-03-09T20:15:32.821 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 35/138 2026-03-09T20:15:32.821 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 36/138 2026-03-09T20:15:32.821 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 37/138 2026-03-09T20:15:32.821 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 38/138 2026-03-09T20:15:32.821 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 39/138 2026-03-09T20:15:32.821 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 40/138 2026-03-09T20:15:32.821 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 41/138 2026-03-09T20:15:32.821 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 42/138 2026-03-09T20:15:32.821 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 43/138 
2026-03-09T20:15:32.821 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/138 2026-03-09T20:15:32.821 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 45/138 2026-03-09T20:15:32.821 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-ply-3.11-14.el9.noarch 46/138 2026-03-09T20:15:32.821 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 47/138 2026-03-09T20:15:32.821 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 48/138 2026-03-09T20:15:32.821 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 49/138 2026-03-09T20:15:32.821 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : unzip-6.0-59.el9.x86_64 50/138 2026-03-09T20:15:32.822 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : zip-3.0-35.el9.x86_64 51/138 2026-03-09T20:15:32.822 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 52/138 2026-03-09T20:15:32.822 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 53/138 2026-03-09T20:15:32.822 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 54/138 2026-03-09T20:15:32.822 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 55/138 2026-03-09T20:15:32.822 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 56/138 2026-03-09T20:15:32.822 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 57/138 2026-03-09T20:15:32.822 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 58/138 2026-03-09T20:15:32.822 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 59/138 2026-03-09T20:15:32.822 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 60/138 2026-03-09T20:15:32.822 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 61/138 2026-03-09T20:15:32.822 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 62/138 2026-03-09T20:15:32.822 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : lua-5.4.4-4.el9.x86_64 63/138 2026-03-09T20:15:32.822 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 64/138 2026-03-09T20:15:32.822 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 65/138 2026-03-09T20:15:32.823 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 66/138 2026-03-09T20:15:32.823 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 67/138 2026-03-09T20:15:32.823 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 68/138 2026-03-09T20:15:32.823 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 69/138 2026-03-09T20:15:32.823 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jmespath-1.0.1-1.el9.noarch 70/138 2026-03-09T20:15:32.823 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 71/138 2026-03-09T20:15:32.823 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 72/138 2026-03-09T20:15:32.823 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : 
python3-markupsafe-1.1.1-12.el9.x86_64 73/138 2026-03-09T20:15:32.823 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 74/138 2026-03-09T20:15:32.823 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 75/138 2026-03-09T20:15:32.823 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 76/138 2026-03-09T20:15:32.823 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 77/138 2026-03-09T20:15:32.823 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 78/138 2026-03-09T20:15:32.823 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 79/138 2026-03-09T20:15:32.823 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 80/138 2026-03-09T20:15:32.823 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 81/138 2026-03-09T20:15:32.823 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 82/138 2026-03-09T20:15:32.823 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 83/138 2026-03-09T20:15:32.823 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 84/138 2026-03-09T20:15:32.823 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 85/138 2026-03-09T20:15:32.823 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 86/138 2026-03-09T20:15:32.823 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 87/138 2026-03-09T20:15:32.823 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 88/138 2026-03-09T20:15:32.823 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 89/138 2026-03-09T20:15:32.823 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 90/138 2026-03-09T20:15:32.823 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 91/138 2026-03-09T20:15:32.823 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 92/138 2026-03-09T20:15:32.823 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 93/138 2026-03-09T20:15:32.823 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 94/138 2026-03-09T20:15:32.823 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 95/138 2026-03-09T20:15:32.824 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 96/138 2026-03-09T20:15:32.824 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 97/138 2026-03-09T20:15:32.824 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 98/138 2026-03-09T20:15:32.824 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 99/138 2026-03-09T20:15:32.824 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 100/138 2026-03-09T20:15:32.824 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 101/138 2026-03-09T20:15:32.824 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 102/138 
2026-03-09T20:15:32.824 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 103/138 2026-03-09T20:15:32.824 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 104/138 2026-03-09T20:15:32.824 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 105/138 2026-03-09T20:15:32.824 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 106/138 2026-03-09T20:15:32.824 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 107/138 2026-03-09T20:15:32.824 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 108/138 2026-03-09T20:15:32.824 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 109/138 2026-03-09T20:15:32.824 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 110/138 2026-03-09T20:15:32.824 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 111/138 2026-03-09T20:15:32.824 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 112/138 2026-03-09T20:15:32.825 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 113/138 2026-03-09T20:15:32.825 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 114/138 2026-03-09T20:15:32.825 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 115/138 2026-03-09T20:15:32.825 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 116/138 2026-03-09T20:15:32.825 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 117/138 2026-03-09T20:15:32.825 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 118/138 2026-03-09T20:15:32.825 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 119/138 2026-03-09T20:15:32.825 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 120/138 2026-03-09T20:15:32.825 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 121/138 2026-03-09T20:15:32.825 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 122/138 2026-03-09T20:15:32.825 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 123/138 2026-03-09T20:15:32.825 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 124/138 2026-03-09T20:15:32.825 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 125/138 2026-03-09T20:15:32.825 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 126/138 2026-03-09T20:15:32.825 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 127/138 2026-03-09T20:15:32.825 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 128/138 2026-03-09T20:15:32.825 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 129/138 2026-03-09T20:15:32.825 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 130/138 2026-03-09T20:15:32.825 
INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-xmltodict-0.12.0-15.el9.noarch 131/138 2026-03-09T20:15:32.825 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 132/138 2026-03-09T20:15:32.825 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : re2-1:20211101-20.el9.x86_64 133/138 2026-03-09T20:15:32.825 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 134/138 2026-03-09T20:15:32.825 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 135/138 2026-03-09T20:15:32.825 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librados2-2:16.2.4-5.el9.x86_64 136/138 2026-03-09T20:15:32.825 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 137/138 2026-03-09T20:15:32.954 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librbd1-2:16.2.4-5.el9.x86_64 138/138 2026-03-09T20:15:32.954 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:15:32.954 INFO:teuthology.orchestra.run.vm09.stdout:Upgraded: 2026-03-09T20:15:32.954 INFO:teuthology.orchestra.run.vm09.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:32.954 INFO:teuthology.orchestra.run.vm09.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:32.954 INFO:teuthology.orchestra.run.vm09.stdout:Installed: 2026-03-09T20:15:32.954 INFO:teuthology.orchestra.run.vm09.stdout: abseil-cpp-20211102.0-4.el9.x86_64 2026-03-09T20:15:32.954 INFO:teuthology.orchestra.run.vm09.stdout: boost-program-options-1.75.0-13.el9.x86_64 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: 
ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: cryptsetup-2.8.1-3.el9.x86_64 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: grpc-data-1.46.7-10.el9.noarch 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: ledmon-libs-1.1.0-3.el9.x86_64 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: libarrow-9.0.0-15.el9.x86_64 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: libarrow-doc-9.0.0-15.el9.noarch 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: libconfig-1.7.2-9.el9.x86_64 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: libnbd-1.20.3-4.el9.x86_64 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: libpmemobj-1.12.1-1.el9.x86_64 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: librabbitmq-0.11.0-7.el9.x86_64 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: librdkafka-1.6.1-102.el9.x86_64 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: libxslt-1.1.34-12.el9.x86_64 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: lttng-ust-2.12.0-6.el9.x86_64 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: lua-5.4.4-4.el9.x86_64 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: lua-devel-5.4.4-4.el9.x86_64 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: luarocks-3.9.2-5.el9.noarch 
2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: mailcap-2.1.49-5.el9.noarch 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: parquet-libs-9.0.0-15.el9.x86_64 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: pciutils-3.7.0-7.el9.x86_64 2026-03-09T20:15:32.955 INFO:teuthology.orchestra.run.vm09.stdout: protobuf-3.14.0-17.el9.x86_64 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: protobuf-compiler-3.14.0-17.el9.x86_64 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-grpcio-1.46.7-10.el9.x86_64 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-jmespath-1.0.1-1.el9.noarch 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: 
python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-logutils-0.3.5-21.el9.noarch 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-packaging-20.9-5.el9.noarch 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-pecan-1.4.2-3.el9.noarch 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-ply-3.11-14.el9.noarch 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-protobuf-3.14.0-17.el9.noarch 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-toml-0.10.2-6.el9.noarch 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-09T20:15:32.956 INFO:teuthology.orchestra.run.vm09.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-09T20:15:32.957 
INFO:teuthology.orchestra.run.vm09.stdout: python3-xmltodict-0.12.0-15.el9.noarch 2026-03-09T20:15:32.957 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-09T20:15:32.957 INFO:teuthology.orchestra.run.vm09.stdout: qatlib-25.08.0-2.el9.x86_64 2026-03-09T20:15:32.957 INFO:teuthology.orchestra.run.vm09.stdout: qatlib-service-25.08.0-2.el9.x86_64 2026-03-09T20:15:32.957 INFO:teuthology.orchestra.run.vm09.stdout: qatzip-libs-1.3.1-1.el9.x86_64 2026-03-09T20:15:32.957 INFO:teuthology.orchestra.run.vm09.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:32.957 INFO:teuthology.orchestra.run.vm09.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:32.957 INFO:teuthology.orchestra.run.vm09.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:32.957 INFO:teuthology.orchestra.run.vm09.stdout: re2-1:20211101-20.el9.x86_64 2026-03-09T20:15:32.957 INFO:teuthology.orchestra.run.vm09.stdout: socat-1.7.4.1-8.el9.x86_64 2026-03-09T20:15:32.957 INFO:teuthology.orchestra.run.vm09.stdout: thrift-0.15.0-4.el9.x86_64 2026-03-09T20:15:32.957 INFO:teuthology.orchestra.run.vm09.stdout: unzip-6.0-59.el9.x86_64 2026-03-09T20:15:32.957 INFO:teuthology.orchestra.run.vm09.stdout: xmlstarlet-1.6.1-20.el9.x86_64 2026-03-09T20:15:32.957 INFO:teuthology.orchestra.run.vm09.stdout: zip-3.0-35.el9.x86_64 2026-03-09T20:15:32.957 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:15:32.957 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-09T20:15:33.077 DEBUG:teuthology.parallel:result is None 2026-03-09T20:15:33.286 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 108/138 2026-03-09T20:15:33.286 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /sys 2026-03-09T20:15:33.286 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /proc 2026-03-09T20:15:33.286 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /mnt 2026-03-09T20:15:33.286 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /var/tmp 2026-03-09T20:15:33.286 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /home 2026-03-09T20:15:33.286 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /root 2026-03-09T20:15:33.286 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /tmp 2026-03-09T20:15:33.286 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:15:33.426 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 109/138 2026-03-09T20:15:33.455 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 109/138 2026-03-09T20:15:33.455 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T20:15:33.455 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service". 2026-03-09T20:15:33.455 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target. 2026-03-09T20:15:33.455 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target. 
2026-03-09T20:15:33.455 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:15:33.705 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 110/138 2026-03-09T20:15:33.734 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 110/138 2026-03-09T20:15:33.734 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T20:15:33.734 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service". 2026-03-09T20:15:33.734 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target. 2026-03-09T20:15:33.734 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target. 2026-03-09T20:15:33.734 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:15:33.746 INFO:teuthology.orchestra.run.vm05.stdout: Installing : mailcap-2.1.49-5.el9.noarch 111/138 2026-03-09T20:15:33.749 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libconfig-1.7.2-9.el9.x86_64 112/138 2026-03-09T20:15:33.771 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 113/138 2026-03-09T20:15:33.771 INFO:teuthology.orchestra.run.vm05.stdout:Creating group 'qat' with GID 994. 2026-03-09T20:15:33.771 INFO:teuthology.orchestra.run.vm05.stdout:Creating group 'libstoragemgmt' with GID 993. 2026-03-09T20:15:33.771 INFO:teuthology.orchestra.run.vm05.stdout:Creating user 'libstoragemgmt' (daemon account for libstoragemgmt) with UID 993 and GID 993. 2026-03-09T20:15:33.771 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:15:33.789 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libstoragemgmt-1.10.1-1.el9.x86_64 113/138 2026-03-09T20:15:33.818 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 113/138 2026-03-09T20:15:33.818 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/libstoragemgmt.service → /usr/lib/systemd/system/libstoragemgmt.service. 2026-03-09T20:15:33.818 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:15:33.863 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 114/138 2026-03-09T20:15:33.943 INFO:teuthology.orchestra.run.vm05.stdout: Installing : cryptsetup-2.8.1-3.el9.x86_64 115/138 2026-03-09T20:15:33.949 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 116/138 2026-03-09T20:15:33.966 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 116/138 2026-03-09T20:15:33.966 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T20:15:33.966 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service". 
2026-03-09T20:15:33.966 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:15:34.805 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 117/138 2026-03-09T20:15:34.835 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 117/138 2026-03-09T20:15:34.835 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T20:15:34.835 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service". 2026-03-09T20:15:34.835 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target. 2026-03-09T20:15:34.835 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target. 2026-03-09T20:15:34.835 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:15:34.904 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 118/138 2026-03-09T20:15:34.963 INFO:teuthology.orchestra.run.vm05.stdout: Installing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 118/138 2026-03-09T20:15:34.970 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 119/138 2026-03-09T20:15:34.994 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 120/138 2026-03-09T20:15:34.998 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 121/138 2026-03-09T20:15:35.581 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 121/138 2026-03-09T20:15:35.587 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 122/138 2026-03-09T20:15:36.156 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 122/138 2026-03-09T20:15:36.159 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 123/138 2026-03-09T20:15:36.223 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 123/138 2026-03-09T20:15:36.279 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 124/138 2026-03-09T20:15:36.282 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 125/138 2026-03-09T20:15:36.304 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 125/138 2026-03-09T20:15:36.304 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T20:15:36.304 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service". 2026-03-09T20:15:36.304 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target. 2026-03-09T20:15:36.304 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target. 
2026-03-09T20:15:36.304 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:15:36.318 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 126/138 2026-03-09T20:15:36.330 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 126/138 2026-03-09T20:15:36.855 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 127/138 2026-03-09T20:15:36.859 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 128/138 2026-03-09T20:15:36.880 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 128/138 2026-03-09T20:15:36.880 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T20:15:36.880 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service". 2026-03-09T20:15:36.880 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target. 2026-03-09T20:15:36.880 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target. 2026-03-09T20:15:36.880 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:15:36.891 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 129/138 2026-03-09T20:15:36.915 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 129/138 2026-03-09T20:15:36.915 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T20:15:36.915 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service". 2026-03-09T20:15:36.915 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:15:37.077 INFO:teuthology.orchestra.run.vm05.stdout: Installing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 130/138 2026-03-09T20:15:37.100 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 130/138 2026-03-09T20:15:37.100 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T20:15:37.100 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service". 2026-03-09T20:15:37.100 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target. 2026-03-09T20:15:37.100 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target. 
2026-03-09T20:15:37.100 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:15:39.713 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 131/138 2026-03-09T20:15:39.724 INFO:teuthology.orchestra.run.vm05.stdout: Installing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 132/138 2026-03-09T20:15:39.731 INFO:teuthology.orchestra.run.vm05.stdout: Installing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 133/138 2026-03-09T20:15:39.790 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 134/138 2026-03-09T20:15:39.800 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 135/138 2026-03-09T20:15:39.804 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jmespath-1.0.1-1.el9.noarch 136/138 2026-03-09T20:15:39.804 INFO:teuthology.orchestra.run.vm05.stdout: Cleanup : librbd1-2:16.2.4-5.el9.x86_64 137/138 2026-03-09T20:15:39.821 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librbd1-2:16.2.4-5.el9.x86_64 137/138 2026-03-09T20:15:39.821 INFO:teuthology.orchestra.run.vm05.stdout: Cleanup : librados2-2:16.2.4-5.el9.x86_64 138/138 2026-03-09T20:15:41.169 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librados2-2:16.2.4-5.el9.x86_64 138/138 2026-03-09T20:15:41.169 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/138 2026-03-09T20:15:41.169 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/138 2026-03-09T20:15:41.169 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/138 2026-03-09T20:15:41.169 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138 2026-03-09T20:15:41.169 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/138 2026-03-09T20:15:41.169 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 6/138 2026-03-09T20:15:41.169 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 7/138 2026-03-09T20:15:41.169 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/138 2026-03-09T20:15:41.169 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 9/138 2026-03-09T20:15:41.169 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 10/138 2026-03-09T20:15:41.169 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138 2026-03-09T20:15:41.169 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 12/138 2026-03-09T20:15:41.170 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 13/138 2026-03-09T20:15:41.170 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 14/138 2026-03-09T20:15:41.170 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 15/138 2026-03-09T20:15:41.170 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 16/138 2026-03-09T20:15:41.170 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 
17/138 2026-03-09T20:15:41.170 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 18/138 2026-03-09T20:15:41.170 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 19/138 2026-03-09T20:15:41.170 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 20/138 2026-03-09T20:15:41.170 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 21/138 2026-03-09T20:15:41.170 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 22/138 2026-03-09T20:15:41.170 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 23/138 2026-03-09T20:15:41.170 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 24/138 2026-03-09T20:15:41.170 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 25/138 2026-03-09T20:15:41.171 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 26/138 2026-03-09T20:15:41.171 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 27/138 2026-03-09T20:15:41.171 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 28/138 2026-03-09T20:15:41.171 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 29/138 2026-03-09T20:15:41.171 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 30/138 2026-03-09T20:15:41.171 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 31/138 2026-03-09T20:15:41.171 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 32/138 2026-03-09T20:15:41.171 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 33/138 2026-03-09T20:15:41.171 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 34/138 2026-03-09T20:15:41.171 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 35/138 2026-03-09T20:15:41.171 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 36/138 2026-03-09T20:15:41.171 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 37/138 2026-03-09T20:15:41.171 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 38/138 2026-03-09T20:15:41.171 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 39/138 2026-03-09T20:15:41.171 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 40/138 2026-03-09T20:15:41.171 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 41/138 2026-03-09T20:15:41.171 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 42/138 2026-03-09T20:15:41.171 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 43/138 2026-03-09T20:15:41.171 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/138 2026-03-09T20:15:41.171 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : 
python3-cryptography-36.0.1-5.el9.x86_64 45/138 2026-03-09T20:15:41.171 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-ply-3.11-14.el9.noarch 46/138 2026-03-09T20:15:41.171 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 47/138 2026-03-09T20:15:41.171 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 48/138 2026-03-09T20:15:41.171 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 49/138 2026-03-09T20:15:41.171 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : unzip-6.0-59.el9.x86_64 50/138 2026-03-09T20:15:41.171 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : zip-3.0-35.el9.x86_64 51/138 2026-03-09T20:15:41.171 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 52/138 2026-03-09T20:15:41.171 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 53/138 2026-03-09T20:15:41.171 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 54/138 2026-03-09T20:15:41.171 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 55/138 2026-03-09T20:15:41.174 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 56/138 2026-03-09T20:15:41.174 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 57/138 2026-03-09T20:15:41.174 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 58/138 2026-03-09T20:15:41.174 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 59/138 2026-03-09T20:15:41.174 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 60/138 2026-03-09T20:15:41.174 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 61/138 2026-03-09T20:15:41.174 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 62/138 2026-03-09T20:15:41.174 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : lua-5.4.4-4.el9.x86_64 63/138 2026-03-09T20:15:41.174 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 64/138 2026-03-09T20:15:41.174 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 65/138 2026-03-09T20:15:41.174 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 66/138 2026-03-09T20:15:41.174 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 67/138 2026-03-09T20:15:41.174 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 68/138 2026-03-09T20:15:41.174 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 69/138 2026-03-09T20:15:41.174 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jmespath-1.0.1-1.el9.noarch 70/138 2026-03-09T20:15:41.174 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 71/138 2026-03-09T20:15:41.174 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 72/138 2026-03-09T20:15:41.174 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 73/138 2026-03-09T20:15:41.174 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 74/138 2026-03-09T20:15:41.174 
INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 75/138 2026-03-09T20:15:41.174 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 76/138 2026-03-09T20:15:41.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 77/138 2026-03-09T20:15:41.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 78/138 2026-03-09T20:15:41.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 79/138 2026-03-09T20:15:41.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 80/138 2026-03-09T20:15:41.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 81/138 2026-03-09T20:15:41.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 82/138 2026-03-09T20:15:41.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 83/138 2026-03-09T20:15:41.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 84/138 2026-03-09T20:15:41.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 85/138 2026-03-09T20:15:41.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 86/138 2026-03-09T20:15:41.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 87/138 2026-03-09T20:15:41.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 88/138 2026-03-09T20:15:41.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 89/138 2026-03-09T20:15:41.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 90/138 2026-03-09T20:15:41.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 91/138 2026-03-09T20:15:41.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 92/138 2026-03-09T20:15:41.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 93/138 2026-03-09T20:15:41.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 94/138 2026-03-09T20:15:41.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 95/138 2026-03-09T20:15:41.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 96/138 2026-03-09T20:15:41.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 97/138 2026-03-09T20:15:41.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 98/138 2026-03-09T20:15:41.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 99/138 2026-03-09T20:15:41.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 100/138 2026-03-09T20:15:41.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 101/138 2026-03-09T20:15:41.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 102/138 2026-03-09T20:15:41.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 103/138 2026-03-09T20:15:41.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : 
python3-certifi-2023.05.07-4.el9.noarch 104/138 2026-03-09T20:15:41.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 105/138 2026-03-09T20:15:41.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 106/138 2026-03-09T20:15:41.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 107/138 2026-03-09T20:15:41.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 108/138 2026-03-09T20:15:41.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 109/138 2026-03-09T20:15:41.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 110/138 2026-03-09T20:15:41.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 111/138 2026-03-09T20:15:41.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 112/138 2026-03-09T20:15:41.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 113/138 2026-03-09T20:15:41.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 114/138 2026-03-09T20:15:41.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 115/138 2026-03-09T20:15:41.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 116/138 2026-03-09T20:15:41.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 117/138 2026-03-09T20:15:41.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 118/138 2026-03-09T20:15:41.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 119/138 2026-03-09T20:15:41.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 120/138 2026-03-09T20:15:41.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 121/138 2026-03-09T20:15:41.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 122/138 2026-03-09T20:15:41.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 123/138 2026-03-09T20:15:41.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 124/138 2026-03-09T20:15:41.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 125/138 2026-03-09T20:15:41.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 126/138 2026-03-09T20:15:41.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 127/138 2026-03-09T20:15:41.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 128/138 2026-03-09T20:15:41.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 129/138 2026-03-09T20:15:41.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 130/138 2026-03-09T20:15:41.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-xmltodict-0.12.0-15.el9.noarch 131/138 2026-03-09T20:15:41.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 
132/138 2026-03-09T20:15:41.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : re2-1:20211101-20.el9.x86_64 133/138 2026-03-09T20:15:41.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 134/138 2026-03-09T20:15:41.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 135/138 2026-03-09T20:15:41.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librados2-2:16.2.4-5.el9.x86_64 136/138 2026-03-09T20:15:41.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 137/138 2026-03-09T20:15:41.293 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librbd1-2:16.2.4-5.el9.x86_64 138/138 2026-03-09T20:15:41.293 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:15:41.293 INFO:teuthology.orchestra.run.vm05.stdout:Upgraded: 2026-03-09T20:15:41.293 INFO:teuthology.orchestra.run.vm05.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:41.293 INFO:teuthology.orchestra.run.vm05.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:41.293 INFO:teuthology.orchestra.run.vm05.stdout:Installed: 2026-03-09T20:15:41.293 INFO:teuthology.orchestra.run.vm05.stdout: abseil-cpp-20211102.0-4.el9.x86_64 2026-03-09T20:15:41.293 INFO:teuthology.orchestra.run.vm05.stdout: boost-program-options-1.75.0-13.el9.x86_64 2026-03-09T20:15:41.293 INFO:teuthology.orchestra.run.vm05.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:41.293 INFO:teuthology.orchestra.run.vm05.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:41.293 INFO:teuthology.orchestra.run.vm05.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:41.293 INFO:teuthology.orchestra.run.vm05.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:41.293 INFO:teuthology.orchestra.run.vm05.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T20:15:41.293 INFO:teuthology.orchestra.run.vm05.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:41.293 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:41.293 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:41.293 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T20:15:41.293 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T20:15:41.293 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T20:15:41.293 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T20:15:41.293 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T20:15:41.293 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:41.293 INFO:teuthology.orchestra.run.vm05.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:41.293 INFO:teuthology.orchestra.run.vm05.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T20:15:41.293 INFO:teuthology.orchestra.run.vm05.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:41.293 INFO:teuthology.orchestra.run.vm05.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:41.293 INFO:teuthology.orchestra.run.vm05.stdout: 
ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:41.293 INFO:teuthology.orchestra.run.vm05.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T20:15:41.293 INFO:teuthology.orchestra.run.vm05.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: cryptsetup-2.8.1-3.el9.x86_64 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: grpc-data-1.46.7-10.el9.noarch 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: ledmon-libs-1.1.0-3.el9.x86_64 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: libarrow-9.0.0-15.el9.x86_64 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: libarrow-doc-9.0.0-15.el9.noarch 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: libconfig-1.7.2-9.el9.x86_64 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: libnbd-1.20.3-4.el9.x86_64 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: libpmemobj-1.12.1-1.el9.x86_64 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: librabbitmq-0.11.0-7.el9.x86_64 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: librdkafka-1.6.1-102.el9.x86_64 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: libxslt-1.1.34-12.el9.x86_64 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: lttng-ust-2.12.0-6.el9.x86_64 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: lua-5.4.4-4.el9.x86_64 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: lua-devel-5.4.4-4.el9.x86_64 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: luarocks-3.9.2-5.el9.noarch 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: mailcap-2.1.49-5.el9.noarch 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-09T20:15:41.294 
INFO:teuthology.orchestra.run.vm05.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: parquet-libs-9.0.0-15.el9.x86_64 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: pciutils-3.7.0-7.el9.x86_64 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: protobuf-3.14.0-17.el9.x86_64 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: protobuf-compiler-3.14.0-17.el9.x86_64 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: python3-grpcio-1.46.7-10.el9.x86_64 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-09T20:15:41.294 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: python3-jmespath-1.0.1-1.el9.noarch 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: 
python3-logutils-0.3.5-21.el9.noarch 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: python3-packaging-20.9-5.el9.noarch 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: python3-pecan-1.4.2-3.el9.noarch 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: python3-ply-3.11-14.el9.noarch 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: python3-protobuf-3.14.0-17.el9.noarch 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: python3-toml-0.10.2-6.el9.noarch 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: python3-xmltodict-0.12.0-15.el9.noarch 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-09T20:15:41.295 
INFO:teuthology.orchestra.run.vm05.stdout: qatlib-25.08.0-2.el9.x86_64 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: qatlib-service-25.08.0-2.el9.x86_64 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: qatzip-libs-1.3.1-1.el9.x86_64 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: re2-1:20211101-20.el9.x86_64 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: socat-1.7.4.1-8.el9.x86_64 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: thrift-0.15.0-4.el9.x86_64 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: unzip-6.0-59.el9.x86_64 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: xmlstarlet-1.6.1-20.el9.x86_64 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: zip-3.0-35.el9.x86_64 2026-03-09T20:15:41.295 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:15:41.296 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 2026-03-09T20:15:41.402 DEBUG:teuthology.parallel:result is None 2026-03-09T20:15:41.402 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T20:15:41.998 DEBUG:teuthology.orchestra.run.vm05:> rpm -q ceph --qf '%{VERSION}-%{RELEASE}' 2026-03-09T20:15:42.017 INFO:teuthology.orchestra.run.vm05.stdout:19.2.3-678.ge911bdeb.el9 2026-03-09T20:15:42.018 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678.ge911bdeb.el9 2026-03-09T20:15:42.018 INFO:teuthology.task.install:The correct ceph version 19.2.3-678.ge911bdeb is installed. 2026-03-09T20:15:42.019 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T20:15:42.705 DEBUG:teuthology.orchestra.run.vm09:> rpm -q ceph --qf '%{VERSION}-%{RELEASE}' 2026-03-09T20:15:42.729 INFO:teuthology.orchestra.run.vm09.stdout:19.2.3-678.ge911bdeb.el9 2026-03-09T20:15:42.729 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678.ge911bdeb.el9 2026-03-09T20:15:42.729 INFO:teuthology.task.install:The correct ceph version 19.2.3-678.ge911bdeb is installed. 2026-03-09T20:15:42.730 INFO:teuthology.task.install.util:Shipping valgrind.supp... 2026-03-09T20:15:42.730 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-09T20:15:42.730 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp 2026-03-09T20:15:42.758 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T20:15:42.758 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp 2026-03-09T20:15:42.798 INFO:teuthology.task.install.util:Shipping 'daemon-helper'... 
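The install step ends with a version gate: on each node the task reads the RPM version with rpm -q ceph --qf '%{VERSION}-%{RELEASE}' and checks it against the build shaman advertised for this sha1 (19.2.3-678.ge911bdeb, with the .el9 distro suffix ignored). A minimal sketch of that check, assuming the two target hosts from the job config and a hypothetical helper name:

    import subprocess

    WANTED = "19.2.3-678.ge911bdeb"   # version shaman reported for this sha1

    def installed_ceph_version(host):
        # Same probe the install task runs on each node.
        out = subprocess.run(
            ["ssh", host, "rpm", "-q", "ceph", "--qf", "%{VERSION}-%{RELEASE}"],
            check=True, capture_output=True, text=True)
        return out.stdout.strip()

    for host in ("vm05.local", "vm09.local"):
        version = installed_ceph_version(host)
        # Drop the distro suffix (.el9) before comparing, as the log output does.
        ok = version.rsplit(".el", 1)[0] == WANTED
        print(host, version, "OK" if ok else "MISMATCH")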
2026-03-09T20:15:42.798 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-09T20:15:42.798 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/usr/bin/daemon-helper 2026-03-09T20:15:42.824 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod a=rx -- /usr/bin/daemon-helper 2026-03-09T20:15:42.889 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T20:15:42.889 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/usr/bin/daemon-helper 2026-03-09T20:15:42.912 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod a=rx -- /usr/bin/daemon-helper 2026-03-09T20:15:42.974 INFO:teuthology.task.install.util:Shipping 'adjust-ulimits'... 2026-03-09T20:15:42.974 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-09T20:15:42.974 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/usr/bin/adjust-ulimits 2026-03-09T20:15:42.999 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod a=rx -- /usr/bin/adjust-ulimits 2026-03-09T20:15:43.063 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T20:15:43.063 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/usr/bin/adjust-ulimits 2026-03-09T20:15:43.084 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod a=rx -- /usr/bin/adjust-ulimits 2026-03-09T20:15:43.149 INFO:teuthology.task.install.util:Shipping 'stdin-killer'... 2026-03-09T20:15:43.149 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-09T20:15:43.149 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/usr/bin/stdin-killer 2026-03-09T20:15:43.174 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod a=rx -- /usr/bin/stdin-killer 2026-03-09T20:15:43.240 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T20:15:43.240 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/usr/bin/stdin-killer 2026-03-09T20:15:43.266 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod a=rx -- /usr/bin/stdin-killer 2026-03-09T20:15:43.331 INFO:teuthology.run_tasks:Running task cephadm... 2026-03-09T20:15:43.377 INFO:tasks.cephadm:Config: {'conf': {'mgr': {'debug mgr': 20, 'debug ms': 1}, 'client': {'debug ms': 1}, 'global': {'mon election default strategy': 1, 'ms bind msgr2': False, 'ms type': 'async'}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20, 'mon warn on pool no app': False}, 'osd': {'debug ms': 1, 'debug osd': 20, 'osd class default list': '*', 'osd class load list': '*', 'osd mclock iops capacity threshold hdd': 49000, 'osd shutdown pgref assert': True}}, 'flavor': 'default', 'log-ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', 'reached quota', 'but it is still running', 'overall HEALTH_', '\\(POOL_FULL\\)', '\\(SMALLER_PGP_NUM\\)', '\\(CACHE_POOL_NO_HIT_SET\\)', '\\(CACHE_POOL_NEAR_FULL\\)', '\\(POOL_APP_NOT_ENABLED\\)', '\\(PG_AVAILABILITY\\)', '\\(PG_DEGRADED\\)', 'CEPHADM_STRAY_DAEMON'], 'log-only-match': ['CEPHADM_'], 'mon_bind_msgr2': False, 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'cephadm_mode': 'cephadm-package'} 2026-03-09T20:15:43.377 INFO:tasks.cephadm:Cluster image is quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T20:15:43.377 INFO:tasks.cephadm:Cluster fsid is c0151936-1bf4-11f1-b896-23f7bea8a6ea 2026-03-09T20:15:43.377 INFO:tasks.cephadm:Choosing monitor IPs and ports... 2026-03-09T20:15:43.377 INFO:tasks.cephadm:Monitor IPs: {'mon.a': '[v1:192.168.123.105:6789]', 'mon.c': '[v1:192.168.123.105:6790]', 'mon.b': '[v1:192.168.123.109:6789]'} 2026-03-09T20:15:43.377 INFO:tasks.cephadm:First mon is mon.a on vm05 2026-03-09T20:15:43.377 INFO:tasks.cephadm:First mgr is y 2026-03-09T20:15:43.377 INFO:tasks.cephadm:Normalizing hostnames... 
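The repeated dd/chmod pairs above are how the helper scripts (valgrind.supp, daemon-helper, adjust-ulimits, stdin-killer) get shipped: the file body is streamed over the SSH channel into sudo dd of=<path>, so no writable staging directory is needed on the remote side, and the target is then made world readable/executable. A sketch of the same pattern, assuming a hypothetical ship() helper and the vm05 target:

    import subprocess

    def ship(host, local_path, remote_path, mode="a=rx"):
        # Stream the local file into 'sudo dd of=<remote_path>' over ssh,
        # then set permissions, mirroring the command pairs in the log.
        with open(local_path, "rb") as src:
            subprocess.run(["ssh", host, "sudo", "dd", f"of={remote_path}"],
                           stdin=src, check=True)
        subprocess.run(["ssh", host, "sudo", "chmod", mode, "--", remote_path],
                       check=True)

    ship("vm05.local", "./daemon-helper", "/usr/bin/daemon-helper")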
2026-03-09T20:15:43.377 DEBUG:teuthology.orchestra.run.vm05:> sudo hostname $(hostname -s) 2026-03-09T20:15:43.401 DEBUG:teuthology.orchestra.run.vm09:> sudo hostname $(hostname -s) 2026-03-09T20:15:43.432 INFO:tasks.cephadm:Pulling image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df on all hosts... 2026-03-09T20:15:43.432 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull 2026-03-09T20:15:43.443 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull 2026-03-09T20:15:43.607 INFO:teuthology.orchestra.run.vm05.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 2026-03-09T20:15:43.632 INFO:teuthology.orchestra.run.vm09.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 2026-03-09T20:16:42.837 INFO:teuthology.orchestra.run.vm05.stdout:{ 2026-03-09T20:16:42.837 INFO:teuthology.orchestra.run.vm05.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)", 2026-03-09T20:16:42.837 INFO:teuthology.orchestra.run.vm05.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c", 2026-03-09T20:16:42.837 INFO:teuthology.orchestra.run.vm05.stdout: "repo_digests": [ 2026-03-09T20:16:42.837 INFO:teuthology.orchestra.run.vm05.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc" 2026-03-09T20:16:42.837 INFO:teuthology.orchestra.run.vm05.stdout: ] 2026-03-09T20:16:42.837 INFO:teuthology.orchestra.run.vm05.stdout:} 2026-03-09T20:17:50.011 INFO:teuthology.orchestra.run.vm09.stdout:{ 2026-03-09T20:17:50.011 INFO:teuthology.orchestra.run.vm09.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)", 2026-03-09T20:17:50.011 INFO:teuthology.orchestra.run.vm09.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c", 2026-03-09T20:17:50.011 INFO:teuthology.orchestra.run.vm09.stdout: "repo_digests": [ 2026-03-09T20:17:50.011 INFO:teuthology.orchestra.run.vm09.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc" 2026-03-09T20:17:50.011 INFO:teuthology.orchestra.run.vm09.stdout: ] 2026-03-09T20:17:50.011 INFO:teuthology.orchestra.run.vm09.stdout:} 2026-03-09T20:17:50.030 DEBUG:teuthology.orchestra.run.vm05:> sudo mkdir -p /etc/ceph 2026-03-09T20:17:50.055 DEBUG:teuthology.orchestra.run.vm09:> sudo mkdir -p /etc/ceph 2026-03-09T20:17:50.080 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod 777 /etc/ceph 2026-03-09T20:17:50.118 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod 777 /etc/ceph 2026-03-09T20:17:50.146 INFO:tasks.cephadm:Writing seed config... 
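cephadm pull prints a JSON summary (ceph_version, image_id, repo_digests) on each host, and the task depends on every node ending up on the same image. A sketch of pulling and cross-checking the image id, assuming the image tag and hosts from this job:

    import json
    import subprocess

    IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"

    def pull_info(host):
        # 'cephadm pull' writes its progress to stderr and the JSON summary
        # to stdout, as seen in the log above.
        r = subprocess.run(["ssh", host, "sudo", "cephadm", "--image", IMAGE, "pull"],
                           check=True, capture_output=True, text=True)
        return json.loads(r.stdout)

    infos = {host: pull_info(host) for host in ("vm05.local", "vm09.local")}
    image_ids = {info["image_id"] for info in infos.values()}
    assert len(image_ids) == 1, f"hosts disagree on the pulled image: {infos}"
    print("all hosts on image", image_ids.pop())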
2026-03-09T20:17:50.147 INFO:tasks.cephadm: override: [mgr] debug mgr = 20 2026-03-09T20:17:50.147 INFO:tasks.cephadm: override: [mgr] debug ms = 1 2026-03-09T20:17:50.147 INFO:tasks.cephadm: override: [client] debug ms = 1 2026-03-09T20:17:50.147 INFO:tasks.cephadm: override: [global] mon election default strategy = 1 2026-03-09T20:17:50.147 INFO:tasks.cephadm: override: [global] ms bind msgr2 = False 2026-03-09T20:17:50.147 INFO:tasks.cephadm: override: [global] ms type = async 2026-03-09T20:17:50.147 INFO:tasks.cephadm: override: [mon] debug mon = 20 2026-03-09T20:17:50.147 INFO:tasks.cephadm: override: [mon] debug ms = 1 2026-03-09T20:17:50.147 INFO:tasks.cephadm: override: [mon] debug paxos = 20 2026-03-09T20:17:50.147 INFO:tasks.cephadm: override: [mon] mon warn on pool no app = False 2026-03-09T20:17:50.147 INFO:tasks.cephadm: override: [osd] debug ms = 1 2026-03-09T20:17:50.147 INFO:tasks.cephadm: override: [osd] debug osd = 20 2026-03-09T20:17:50.147 INFO:tasks.cephadm: override: [osd] osd class default list = * 2026-03-09T20:17:50.147 INFO:tasks.cephadm: override: [osd] osd class load list = * 2026-03-09T20:17:50.147 INFO:tasks.cephadm: override: [osd] osd mclock iops capacity threshold hdd = 49000 2026-03-09T20:17:50.147 INFO:tasks.cephadm: override: [osd] osd shutdown pgref assert = True 2026-03-09T20:17:50.147 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-09T20:17:50.147 DEBUG:teuthology.orchestra.run.vm05:> dd of=/home/ubuntu/cephtest/seed.ceph.conf 2026-03-09T20:17:50.176 DEBUG:tasks.cephadm:Final config: [global] # make logging friendly to teuthology log_to_file = true log_to_stderr = false log to journald = false mon cluster log to file = true mon cluster log file level = debug mon clock drift allowed = 1.000 # replicate across OSDs, not hosts osd crush chooseleaf type = 0 #osd pool default size = 2 osd pool default erasure code profile = plugin=jerasure technique=reed_sol_van k=2 m=1 crush-failure-domain=osd # enable some debugging auth debug = true ms die on old message = true ms die on bug = true debug asserts on shutdown = true # adjust warnings mon max pg per osd = 10000# >= luminous mon pg warn max object skew = 0 mon osd allow primary affinity = true mon osd allow pg remap = true mon warn on legacy crush tunables = false mon warn on crush straw calc version zero = false mon warn on no sortbitwise = false mon warn on osd down out interval zero = false mon warn on too few osds = false mon_warn_on_pool_pg_num_not_power_of_two = false # disable pg_autoscaler by default for new pools osd_pool_default_pg_autoscale_mode = off # tests delete pools mon allow pool delete = true fsid = c0151936-1bf4-11f1-b896-23f7bea8a6ea mon election default strategy = 1 ms bind msgr2 = False ms type = async [osd] osd scrub load threshold = 5.0 osd scrub max interval = 600 osd mclock profile = high_recovery_ops osd recover clone overlap = true osd recovery max chunk = 1048576 osd deep scrub update digest min age = 30 osd map max advance = 10 osd memory target autotune = true # debugging osd debug shutdown = true osd debug op order = true osd debug verify stray on activate = true osd debug pg log writeout = true osd debug verify cached snaps = true osd debug verify missing on start = true osd debug misdirected ops = true osd op queue = debug_random osd op queue cut off = debug_random osd shutdown pgref assert = True bdev debug aio = true osd sloppy crc = true debug ms = 1 debug osd = 20 osd class default list = * osd class load list = * osd mclock iops capacity threshold hdd = 49000 
[mgr] mon reweight min pgs per osd = 4 mon reweight min bytes per osd = 10 mgr/telemetry/nag = false debug mgr = 20 debug ms = 1 [mon] mon data avail warn = 5 mon mgr mkfs grace = 240 mon reweight min pgs per osd = 4 mon osd reporter subtree level = osd mon osd prime pg temp = true mon reweight min bytes per osd = 10 # rotate auth tickets quickly to exercise renewal paths auth mon ticket ttl = 660# 11m auth service ticket ttl = 240# 4m # don't complain about global id reclaim mon_warn_on_insecure_global_id_reclaim = false mon_warn_on_insecure_global_id_reclaim_allowed = false debug mon = 20 debug ms = 1 debug paxos = 20 mon warn on pool no app = False [client.rgw] rgw cache enabled = true rgw enable ops log = true rgw enable usage log = true [client] debug ms = 1 2026-03-09T20:17:50.176 DEBUG:teuthology.orchestra.run.vm05:mon.a> sudo journalctl -f -n 0 -u ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@mon.a.service 2026-03-09T20:17:50.218 DEBUG:teuthology.orchestra.run.vm05:mgr.y> sudo journalctl -f -n 0 -u ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@mgr.y.service 2026-03-09T20:17:50.260 INFO:tasks.cephadm:Bootstrapping... 2026-03-09T20:17:50.260 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df -v bootstrap --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id y --orphan-initial-daemons --skip-monitoring-stack --mon-addrv '[v1:192.168.123.105:6789]' --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring 2026-03-09T20:17:50.407 INFO:teuthology.orchestra.run.vm05.stdout:-------------------------------------------------------------------------------- 2026-03-09T20:17:50.407 INFO:teuthology.orchestra.run.vm05.stdout:cephadm ['--image', 'quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df', '-v', 'bootstrap', '--fsid', 'c0151936-1bf4-11f1-b896-23f7bea8a6ea', '--config', '/home/ubuntu/cephtest/seed.ceph.conf', '--output-config', '/etc/ceph/ceph.conf', '--output-keyring', '/etc/ceph/ceph.client.admin.keyring', '--output-pub-ssh-key', '/home/ubuntu/cephtest/ceph.pub', '--mon-id', 'a', '--mgr-id', 'y', '--orphan-initial-daemons', '--skip-monitoring-stack', '--mon-addrv', '[v1:192.168.123.105:6789]', '--skip-admin-label'] 2026-03-09T20:17:50.408 INFO:teuthology.orchestra.run.vm05.stderr:Specifying an fsid for your cluster offers no advantages and may increase the likelihood of fsid conflicts. 2026-03-09T20:17:50.408 INFO:teuthology.orchestra.run.vm05.stdout:Verifying podman|docker is present... 2026-03-09T20:17:50.434 INFO:teuthology.orchestra.run.vm05.stdout:/bin/podman: stdout 5.8.0 2026-03-09T20:17:50.434 INFO:teuthology.orchestra.run.vm05.stdout:Verifying lvm2 is present... 2026-03-09T20:17:50.434 INFO:teuthology.orchestra.run.vm05.stdout:Verifying time synchronization is in place... 
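Before bootstrapping, the task attaches a live journalctl -f to the systemd unit of each daemon it is about to create (mon.a and mgr.y above), which is why daemon logs end up interleaved in the teuthology log. The same followers can be reproduced with a couple of background processes; the fsid is the one chosen for this job:

    import subprocess

    FSID = "c0151936-1bf4-11f1-b896-23f7bea8a6ea"

    def follow(host, daemon):
        # Tail the unit cephadm creates for this daemon, starting from "now"
        # (-n 0), as the task does for mon.a and mgr.y above.
        unit = f"ceph-{FSID}@{daemon}.service"
        return subprocess.Popen(["ssh", host, "sudo", "journalctl", "-f", "-n", "0",
                                 "-u", unit])

    tails = [follow("vm05.local", "mon.a"), follow("vm05.local", "mgr.y")]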
2026-03-09T20:17:50.446 INFO:teuthology.orchestra.run.vm05.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service 2026-03-09T20:17:50.446 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory 2026-03-09T20:17:50.460 INFO:teuthology.orchestra.run.vm05.stdout:Non-zero exit code 3 from systemctl is-active chrony.service 2026-03-09T20:17:50.460 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stdout inactive 2026-03-09T20:17:50.476 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stdout enabled 2026-03-09T20:17:50.483 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stdout active 2026-03-09T20:17:50.483 INFO:teuthology.orchestra.run.vm05.stdout:Unit chronyd.service is enabled and running 2026-03-09T20:17:50.483 INFO:teuthology.orchestra.run.vm05.stdout:Repeating the final host check... 2026-03-09T20:17:50.503 INFO:teuthology.orchestra.run.vm05.stdout:/bin/podman: stdout 5.8.0 2026-03-09T20:17:50.503 INFO:teuthology.orchestra.run.vm05.stdout:podman (/bin/podman) version 5.8.0 is present 2026-03-09T20:17:50.503 INFO:teuthology.orchestra.run.vm05.stdout:systemctl is present 2026-03-09T20:17:50.503 INFO:teuthology.orchestra.run.vm05.stdout:lvcreate is present 2026-03-09T20:17:50.511 INFO:teuthology.orchestra.run.vm05.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service 2026-03-09T20:17:50.511 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory 2026-03-09T20:17:50.517 INFO:teuthology.orchestra.run.vm05.stdout:Non-zero exit code 3 from systemctl is-active chrony.service 2026-03-09T20:17:50.517 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stdout inactive 2026-03-09T20:17:50.524 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stdout enabled 2026-03-09T20:17:50.531 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stdout active 2026-03-09T20:17:50.531 INFO:teuthology.orchestra.run.vm05.stdout:Unit chronyd.service is enabled and running 2026-03-09T20:17:50.531 INFO:teuthology.orchestra.run.vm05.stdout:Host looks OK 2026-03-09T20:17:50.531 INFO:teuthology.orchestra.run.vm05.stdout:Cluster fsid: c0151936-1bf4-11f1-b896-23f7bea8a6ea 2026-03-09T20:17:50.531 INFO:teuthology.orchestra.run.vm05.stdout:Acquiring lock 140427122102816 on /run/cephadm/c0151936-1bf4-11f1-b896-23f7bea8a6ea.lock 2026-03-09T20:17:50.531 INFO:teuthology.orchestra.run.vm05.stdout:Lock 140427122102816 acquired on /run/cephadm/c0151936-1bf4-11f1-b896-23f7bea8a6ea.lock 2026-03-09T20:17:50.531 INFO:teuthology.orchestra.run.vm05.stdout:Verifying IP 192.168.123.105 port 6789 ... 
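The "Non-zero exit code" lines from systemctl are not failures: the time-synchronization check probes candidate unit names and accepts the first one that is both enabled and active, so chrony.service failing the probe simply hands over to chronyd.service, which passes. A sketch of that probe pattern (the candidate list here is illustrative, not cephadm's exact list):

    import subprocess

    CANDIDATES = ["chrony.service", "chronyd.service",
                  "ntpd.service", "systemd-timesyncd.service"]  # illustrative

    def time_sync_unit(host):
        # is-enabled / is-active exit non-zero for missing or inactive units,
        # which is all the "Non-zero exit code" lines above amount to.
        for unit in CANDIDATES:
            enabled = subprocess.run(["ssh", host, "systemctl", "is-enabled", unit],
                                     capture_output=True).returncode == 0
            active = subprocess.run(["ssh", host, "systemctl", "is-active", unit],
                                    capture_output=True).returncode == 0
            if enabled and active:
                return unit
        return None

    print(time_sync_unit("vm05.local"))   # expect chronyd.service, as in the log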
2026-03-09T20:17:50.532 INFO:teuthology.orchestra.run.vm05.stdout:Base mon IP(s) is [192.168.123.105:6789], mon addrv is [v1:192.168.123.105:6789] 2026-03-09T20:17:50.535 INFO:teuthology.orchestra.run.vm05.stdout:/sbin/ip: stdout default via 192.168.123.1 dev eth0 proto dhcp src 192.168.123.105 metric 100 2026-03-09T20:17:50.535 INFO:teuthology.orchestra.run.vm05.stdout:/sbin/ip: stdout 192.168.123.0/24 dev eth0 proto kernel scope link src 192.168.123.105 metric 100 2026-03-09T20:17:50.538 INFO:teuthology.orchestra.run.vm05.stdout:/sbin/ip: stdout ::1 dev lo proto kernel metric 256 pref medium 2026-03-09T20:17:50.538 INFO:teuthology.orchestra.run.vm05.stdout:/sbin/ip: stdout fe80::/64 dev eth0 proto kernel metric 1024 pref medium 2026-03-09T20:17:50.540 INFO:teuthology.orchestra.run.vm05.stdout:/sbin/ip: stdout 1: lo: mtu 65536 state UNKNOWN qlen 1000 2026-03-09T20:17:50.540 INFO:teuthology.orchestra.run.vm05.stdout:/sbin/ip: stdout inet6 ::1/128 scope host 2026-03-09T20:17:50.541 INFO:teuthology.orchestra.run.vm05.stdout:/sbin/ip: stdout valid_lft forever preferred_lft forever 2026-03-09T20:17:50.541 INFO:teuthology.orchestra.run.vm05.stdout:/sbin/ip: stdout 2: eth0: mtu 1500 state UP qlen 1000 2026-03-09T20:17:50.541 INFO:teuthology.orchestra.run.vm05.stdout:/sbin/ip: stdout inet6 fe80::5055:ff:fe00:5/64 scope link noprefixroute 2026-03-09T20:17:50.541 INFO:teuthology.orchestra.run.vm05.stdout:/sbin/ip: stdout valid_lft forever preferred_lft forever 2026-03-09T20:17:50.541 INFO:teuthology.orchestra.run.vm05.stdout:Mon IP `192.168.123.105` is in CIDR network `192.168.123.0/24` 2026-03-09T20:17:50.541 INFO:teuthology.orchestra.run.vm05.stdout:Inferred mon public CIDR from local network configuration ['192.168.123.0/24'] 2026-03-09T20:17:50.541 INFO:teuthology.orchestra.run.vm05.stdout:Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network 2026-03-09T20:17:50.542 INFO:teuthology.orchestra.run.vm05.stdout:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 2026-03-09T20:17:51.795 INFO:teuthology.orchestra.run.vm05.stdout:/bin/podman: stdout 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c 2026-03-09T20:17:51.795 INFO:teuthology.orchestra.run.vm05.stdout:/bin/podman: stderr Trying to pull quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 
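The mon public CIDR is inferred by checking which locally configured subnet (from the ip route/ip addr output above) contains the mon IP. The same containment test, written with Python's ipaddress module:

    import ipaddress

    mon_ip = ipaddress.ip_address("192.168.123.105")
    local_subnets = [ipaddress.ip_network("192.168.123.0/24")]   # from 'ip route'

    public_network = next((net for net in local_subnets if mon_ip in net), None)
    print(public_network)   # 192.168.123.0/24, matching the inferred CIDR above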
2026-03-09T20:17:51.795 INFO:teuthology.orchestra.run.vm05.stdout:/bin/podman: stderr Getting image source signatures 2026-03-09T20:17:51.795 INFO:teuthology.orchestra.run.vm05.stdout:/bin/podman: stderr Copying blob sha256:1752b8d01aa0dd33bbe0ab24e8316174c94fbdcd5d26252e2680bba0624747a7 2026-03-09T20:17:51.795 INFO:teuthology.orchestra.run.vm05.stdout:/bin/podman: stderr Copying blob sha256:8e380faede39ebd4286247457b408d979ab568aafd8389c42ec304b8cfba4e92 2026-03-09T20:17:51.795 INFO:teuthology.orchestra.run.vm05.stdout:/bin/podman: stderr Copying config sha256:654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c 2026-03-09T20:17:51.795 INFO:teuthology.orchestra.run.vm05.stdout:/bin/podman: stderr Writing manifest to image destination 2026-03-09T20:17:52.065 INFO:teuthology.orchestra.run.vm05.stdout:ceph: stdout ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable) 2026-03-09T20:17:52.065 INFO:teuthology.orchestra.run.vm05.stdout:Ceph version: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable) 2026-03-09T20:17:52.065 INFO:teuthology.orchestra.run.vm05.stdout:Extracting ceph user uid/gid from container image... 2026-03-09T20:17:52.334 INFO:teuthology.orchestra.run.vm05.stdout:stat: stdout 167 167 2026-03-09T20:17:52.334 INFO:teuthology.orchestra.run.vm05.stdout:Creating initial keys... 2026-03-09T20:17:52.585 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-authtool: stdout AQDwKq9p7fJaGhAAjp/6xBEI3IKHg9fzIZ2aGQ== 2026-03-09T20:17:52.820 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-authtool: stdout AQDwKq9pSU9UKBAA95mLisc2W+BU44KbfO73OQ== 2026-03-09T20:17:53.065 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-authtool: stdout AQDwKq9pVIURNxAArY/cvo0+vGWoujCxcGiyjQ== 2026-03-09T20:17:53.065 INFO:teuthology.orchestra.run.vm05.stdout:Creating initial monmap... 2026-03-09T20:17:53.312 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: monmap file /tmp/monmap 2026-03-09T20:17:53.312 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/monmaptool: stdout setting min_mon_release = quincy 2026-03-09T20:17:53.312 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: set fsid to c0151936-1bf4-11f1-b896-23f7bea8a6ea 2026-03-09T20:17:53.312 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors) 2026-03-09T20:17:53.312 INFO:teuthology.orchestra.run.vm05.stdout:monmaptool for a [v1:192.168.123.105:6789] on /usr/bin/monmaptool: monmap file /tmp/monmap 2026-03-09T20:17:53.312 INFO:teuthology.orchestra.run.vm05.stdout:setting min_mon_release = quincy 2026-03-09T20:17:53.312 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/monmaptool: set fsid to c0151936-1bf4-11f1-b896-23f7bea8a6ea 2026-03-09T20:17:53.312 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors) 2026-03-09T20:17:53.313 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:17:53.313 INFO:teuthology.orchestra.run.vm05.stdout:Creating mon... 2026-03-09T20:17:53.538 INFO:teuthology.orchestra.run.vm05.stdout:create mon.a on 2026-03-09T20:17:53.705 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stderr Removed "/etc/systemd/system/multi-user.target.wants/ceph.target". 
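The "Creating initial keys" stdout lines (the AQD... strings) are freshly generated cephx secrets for the bootstrap entities (mon, client.admin and mgr in cephadm's bootstrap); ceph-authtool --gen-print-key emits one such secret, and the seed keyring handed to the first mon is just those secrets written under the usual entity names. A rough sketch, with the keyring layout written by hand rather than taken from cephadm:

    import subprocess

    def gen_key():
        # Each 'Creating initial keys' line above is one of these: ask
        # ceph-authtool for a new secret without touching any keyring file.
        return subprocess.run(["ceph-authtool", "--gen-print-key"],
                              check=True, capture_output=True, text=True).stdout.strip()

    def write_seed_keyring(path, mon_key, admin_key):
        # Minimal keyring of the shape fed to the first mon; names and caps
        # are the conventional ones, written here by hand.
        with open(path, "w") as f:
            f.write('[mon.]\n\tkey = %s\n\tcaps mon = "allow *"\n' % mon_key)
            f.write('[client.admin]\n\tkey = %s\n\tcaps mon = "allow *"\n'
                    '\tcaps osd = "allow *"\n\tcaps mds = "allow *"\n'
                    '\tcaps mgr = "allow *"\n' % admin_key)

    write_seed_keyring("/tmp/seed.keyring", gen_key(), gen_key())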
2026-03-09T20:17:53.832 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /etc/systemd/system/ceph.target. 2026-03-09T20:17:53.959 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea.target → /etc/systemd/system/ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea.target. 2026-03-09T20:17:53.959 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph.target.wants/ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea.target → /etc/systemd/system/ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea.target. 2026-03-09T20:17:54.104 INFO:teuthology.orchestra.run.vm05.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@mon.a 2026-03-09T20:17:54.104 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stderr Failed to reset failed state of unit ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@mon.a.service: Unit ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@mon.a.service not loaded. 2026-03-09T20:17:54.244 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea.target.wants/ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@mon.a.service → /etc/systemd/system/ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@.service. 2026-03-09T20:17:54.422 INFO:teuthology.orchestra.run.vm05.stdout:firewalld does not appear to be present 2026-03-09T20:17:54.422 INFO:teuthology.orchestra.run.vm05.stdout:Not possible to enable service . firewalld.service is not available 2026-03-09T20:17:54.422 INFO:teuthology.orchestra.run.vm05.stdout:Waiting for mon to start... 2026-03-09T20:17:54.422 INFO:teuthology.orchestra.run.vm05.stdout:Waiting for mon... 
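"Waiting for mon..." is a poll loop: bootstrap keeps running ceph status against the new monitor (with the just-written config and admin keyring) until it answers, and the HEALTH_OK status block that follows is the first successful reply. A sketch of such a wait loop, with the paths passed in explicitly rather than cephadm's defaults:

    import subprocess
    import time

    def wait_for_mon(conf, keyring, timeout=60, interval=2):
        # Keep asking the cluster for its status until the mon responds or
        # the timeout expires.
        deadline = time.time() + timeout
        while time.time() < deadline:
            r = subprocess.run(["ceph", "--conf", conf, "--keyring", keyring, "status"],
                               capture_output=True, text=True)
            if r.returncode == 0:
                return r.stdout
            time.sleep(interval)
        raise TimeoutError(f"mon did not come up within {timeout}s")

    print(wait_for_mon("/etc/ceph/ceph.conf", "/etc/ceph/ceph.client.admin.keyring"))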
2026-03-09T20:17:54.769 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout cluster: 2026-03-09T20:17:54.769 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout id: c0151936-1bf4-11f1-b896-23f7bea8a6ea 2026-03-09T20:17:54.769 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout health: HEALTH_OK 2026-03-09T20:17:54.769 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 2026-03-09T20:17:54.769 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout services: 2026-03-09T20:17:54.769 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mon: 1 daemons, quorum a (age 0.16987s) 2026-03-09T20:17:54.769 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mgr: no daemons active 2026-03-09T20:17:54.769 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout osd: 0 osds: 0 up, 0 in 2026-03-09T20:17:54.769 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 2026-03-09T20:17:54.769 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout data: 2026-03-09T20:17:54.769 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout pools: 0 pools, 0 pgs 2026-03-09T20:17:54.769 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout objects: 0 objects, 0 B 2026-03-09T20:17:54.769 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout usage: 0 B used, 0 B / 0 B avail 2026-03-09T20:17:54.769 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout pgs: 2026-03-09T20:17:54.769 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 2026-03-09T20:17:54.769 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.574+0000 7f6c65cfd640 1 Processor -- start 2026-03-09T20:17:54.769 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.575+0000 7f6c65cfd640 1 -- start start 2026-03-09T20:17:54.769 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.575+0000 7f6c65cfd640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f6c6010cd80 con 0x7f6c60108950 2026-03-09T20:17:54.769 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.576+0000 7f6c5f7fe640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f6c60108950 0x7f6c60108d50 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:51042/0 (socket says 192.168.123.105:51042) 2026-03-09T20:17:54.769 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.576+0000 7f6c5f7fe640 1 -- 192.168.123.105:0/1436792132 learned_addr learned my addr 192.168.123.105:0/1436792132 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:17:54.769 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.576+0000 7f6c5e7fc640 1 -- 192.168.123.105:0/1436792132 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 4004104021 0 0) 0x7f6c6010cd80 con 0x7f6c60108950 2026-03-09T20:17:54.769 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.576+0000 7f6c5e7fc640 1 -- 192.168.123.105:0/1436792132 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f6c44003620 con 0x7f6c60108950 2026-03-09T20:17:54.769 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.577+0000 7f6c5e7fc640 1 -- 192.168.123.105:0/1436792132 <== mon.0 v1:192.168.123.105:6789/0 2 ==== 
auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 1143530961 0 0) 0x7f6c44003620 con 0x7f6c60108950 2026-03-09T20:17:54.769 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.577+0000 7f6c5e7fc640 1 -- 192.168.123.105:0/1436792132 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f6c6010df60 con 0x7f6c60108950 2026-03-09T20:17:54.769 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.577+0000 7f6c5e7fc640 1 -- 192.168.123.105:0/1436792132 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f6c50002e10 con 0x7f6c60108950 2026-03-09T20:17:54.769 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.577+0000 7f6c5e7fc640 1 -- 192.168.123.105:0/1436792132 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(0 keys) ==== 4+0+0 (unknown 0 0 0) 0x7f6c50003020 con 0x7f6c60108950 2026-03-09T20:17:54.769 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.577+0000 7f6c5e7fc640 1 -- 192.168.123.105:0/1436792132 <== mon.0 v1:192.168.123.105:6789/0 5 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f6c50003420 con 0x7f6c60108950 2026-03-09T20:17:54.769 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.578+0000 7f6c65cfd640 1 -- 192.168.123.105:0/1436792132 >> v1:192.168.123.105:6789/0 conn(0x7f6c60108950 legacy=0x7f6c60108d50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:17:54.769 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.578+0000 7f6c65cfd640 1 -- 192.168.123.105:0/1436792132 shutdown_connections 2026-03-09T20:17:54.769 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.578+0000 7f6c65cfd640 1 -- 192.168.123.105:0/1436792132 >> 192.168.123.105:0/1436792132 conn(0x7f6c6007bdf0 msgr2=0x7f6c6007c240 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:17:54.769 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.578+0000 7f6c65cfd640 1 -- 192.168.123.105:0/1436792132 shutdown_connections 2026-03-09T20:17:54.769 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.578+0000 7f6c65cfd640 1 -- 192.168.123.105:0/1436792132 wait complete. 
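Because the seed config sets debug ms = 1 for clients, every ceph CLI call during bootstrap dumps messenger traces (Processor -- start, auth exchanges, mark_down, shutdown_connections) on stderr; only the stdout block above is the actual status output. When scanning the teuthology log, the stderr chatter can be dropped with a trivial filter, for example:

    import sys

    # Keep everything except the '/usr/bin/ceph: stderr' messenger traces,
    # which is where the 'debug ms = 1' noise ends up in this log.
    for line in sys.stdin:
        if "/usr/bin/ceph: stderr" not in line:
            sys.stdout.write(line)

Run it as: python3 filter.py < teuthology.log.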
2026-03-09T20:17:54.769 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.579+0000 7f6c65cfd640 1 Processor -- start 2026-03-09T20:17:54.769 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.579+0000 7f6c65cfd640 1 -- start start 2026-03-09T20:17:54.769 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.579+0000 7f6c65cfd640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f6c6019e3b0 con 0x7f6c60108950 2026-03-09T20:17:54.769 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.579+0000 7f6c5f7fe640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f6c60108950 0x7f6c6019dca0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:51050/0 (socket says 192.168.123.105:51050) 2026-03-09T20:17:54.769 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.579+0000 7f6c5f7fe640 1 -- 192.168.123.105:0/582173131 learned_addr learned my addr 192.168.123.105:0/582173131 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:17:54.769 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.580+0000 7f6c5cff9640 1 -- 192.168.123.105:0/582173131 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3497765401 0 0) 0x7f6c6019e3b0 con 0x7f6c60108950 2026-03-09T20:17:54.769 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.580+0000 7f6c5cff9640 1 -- 192.168.123.105:0/582173131 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f6c38003620 con 0x7f6c60108950 2026-03-09T20:17:54.769 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.580+0000 7f6c5cff9640 1 -- 192.168.123.105:0/582173131 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2214592853 0 0) 0x7f6c38003620 con 0x7f6c60108950 2026-03-09T20:17:54.769 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.580+0000 7f6c5cff9640 1 -- 192.168.123.105:0/582173131 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f6c6019e3b0 con 0x7f6c60108950 2026-03-09T20:17:54.769 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.580+0000 7f6c5cff9640 1 -- 192.168.123.105:0/582173131 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f6c500032e0 con 0x7f6c60108950 2026-03-09T20:17:54.769 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.580+0000 7f6c5cff9640 1 -- 192.168.123.105:0/582173131 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 878880369 0 0) 0x7f6c6019e3b0 con 0x7f6c60108950 2026-03-09T20:17:54.769 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.580+0000 7f6c5cff9640 1 -- 192.168.123.105:0/582173131 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f6c6019e580 con 0x7f6c60108950 2026-03-09T20:17:54.769 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.580+0000 7f6c65cfd640 1 -- 192.168.123.105:0/582173131 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f6c6019e890 con 0x7f6c60108950 2026-03-09T20:17:54.769 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.580+0000 7f6c65cfd640 1 -- 192.168.123.105:0/582173131 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f6c601a2420 con 0x7f6c60108950 2026-03-09T20:17:54.769 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.581+0000 7f6c5cff9640 1 -- 192.168.123.105:0/582173131 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(0 keys) ==== 4+0+0 (unknown 0 0 0) 0x7f6c50003020 con 0x7f6c60108950 2026-03-09T20:17:54.769 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.581+0000 7f6c5cff9640 1 -- 192.168.123.105:0/582173131 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f6c50004cf0 con 0x7f6c60108950 2026-03-09T20:17:54.770 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.581+0000 7f6c5cff9640 1 -- 192.168.123.105:0/582173131 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 1) ==== 811+0+0 (unknown 4133961934 0 0) 0x7f6c500052a0 con 0x7f6c60108950 2026-03-09T20:17:54.770 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.581+0000 7f6c5cff9640 1 -- 192.168.123.105:0/582173131 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (unknown 1808929742 0 0) 0x7f6c50006820 con 0x7f6c60108950 2026-03-09T20:17:54.770 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.582+0000 7f6c65cfd640 1 -- 192.168.123.105:0/582173131 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f6c2c005180 con 0x7f6c60108950 2026-03-09T20:17:54.770 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.583+0000 7f6c5cff9640 1 -- 192.168.123.105:0/582173131 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+75931 (unknown 1092875540 0 4127419540) 0x7f6c50006a70 con 0x7f6c60108950 2026-03-09T20:17:54.770 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.616+0000 7f6c65cfd640 1 -- 192.168.123.105:0/582173131 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "status"} v 0) -- 0x7f6c2c005c80 con 0x7f6c60108950 2026-03-09T20:17:54.770 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.617+0000 7f6c5cff9640 1 -- 192.168.123.105:0/582173131 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "status"}]=0 v0) ==== 54+0+316 (unknown 1155462804 0 2713923385) 0x7f6c500057a0 con 0x7f6c60108950 2026-03-09T20:17:54.770 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.618+0000 7f6c65cfd640 1 -- 192.168.123.105:0/582173131 >> v1:192.168.123.105:6789/0 conn(0x7f6c60108950 legacy=0x7f6c6019dca0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:17:54.770 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.619+0000 7f6c65cfd640 1 -- 192.168.123.105:0/582173131 shutdown_connections 2026-03-09T20:17:54.770 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.619+0000 7f6c65cfd640 1 -- 192.168.123.105:0/582173131 >> 192.168.123.105:0/582173131 conn(0x7f6c6007bdf0 msgr2=0x7f6c60107780 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:17:54.770 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.619+0000 7f6c65cfd640 1 -- 
192.168.123.105:0/582173131 shutdown_connections 2026-03-09T20:17:54.770 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.619+0000 7f6c65cfd640 1 -- 192.168.123.105:0/582173131 wait complete. 2026-03-09T20:17:54.770 INFO:teuthology.orchestra.run.vm05.stdout:mon is available 2026-03-09T20:17:54.770 INFO:teuthology.orchestra.run.vm05.stdout:Assimilating anything we can from ceph.conf... 2026-03-09T20:17:55.081 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 2026-03-09T20:17:55.081 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout [global] 2026-03-09T20:17:55.081 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout fsid = c0151936-1bf4-11f1-b896-23f7bea8a6ea 2026-03-09T20:17:55.081 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug 2026-03-09T20:17:55.081 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mon_host = [v1:192.168.123.105:6789] 2026-03-09T20:17:55.081 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true 2026-03-09T20:17:55.081 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true 2026-03-09T20:17:55.081 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false 2026-03-09T20:17:55.081 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0 2026-03-09T20:17:55.081 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 2026-03-09T20:17:55.081 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout [mgr] 2026-03-09T20:17:55.081 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false 2026-03-09T20:17:55.081 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 2026-03-09T20:17:55.081 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout [osd] 2026-03-09T20:17:55.081 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10 2026-03-09T20:17:55.081 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true 2026-03-09T20:17:55.081 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.903+0000 7f7c1d883640 1 Processor -- start 2026-03-09T20:17:55.081 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.904+0000 7f7c1d883640 1 -- start start 2026-03-09T20:17:55.081 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.904+0000 7f7c1d883640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f7c18080ed0 con 0x7f7c1810d960 2026-03-09T20:17:55.081 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.904+0000 7f7c16ffd640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f7c1810d960 0x7f7c1810fd50 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:51052/0 (socket says 192.168.123.105:51052) 2026-03-09T20:17:55.081 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.904+0000 7f7c16ffd640 1 -- 192.168.123.105:0/2420041882 learned_addr learned my addr 192.168.123.105:0/2420041882 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:17:55.081 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.905+0000 7f7c15ffb640 1 -- 192.168.123.105:0/2420041882 <== mon.0 
v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2949356809 0 0) 0x7f7c18080ed0 con 0x7f7c1810d960 2026-03-09T20:17:55.081 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.905+0000 7f7c15ffb640 1 -- 192.168.123.105:0/2420041882 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f7c04003620 con 0x7f7c1810d960 2026-03-09T20:17:55.081 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.905+0000 7f7c15ffb640 1 -- 192.168.123.105:0/2420041882 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 3263294132 0 0) 0x7f7c04003620 con 0x7f7c1810d960 2026-03-09T20:17:55.081 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.905+0000 7f7c15ffb640 1 -- 192.168.123.105:0/2420041882 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f7c18112480 con 0x7f7c1810d960 2026-03-09T20:17:55.081 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.905+0000 7f7c15ffb640 1 -- 192.168.123.105:0/2420041882 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f7c00002e10 con 0x7f7c1810d960 2026-03-09T20:17:55.081 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.905+0000 7f7c15ffb640 1 -- 192.168.123.105:0/2420041882 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(0 keys) ==== 4+0+0 (unknown 0 0 0) 0x7f7c00003020 con 0x7f7c1810d960 2026-03-09T20:17:55.081 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.905+0000 7f7c15ffb640 1 -- 192.168.123.105:0/2420041882 <== mon.0 v1:192.168.123.105:6789/0 5 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f7c00003420 con 0x7f7c1810d960 2026-03-09T20:17:55.081 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.906+0000 7f7c1d883640 1 -- 192.168.123.105:0/2420041882 >> v1:192.168.123.105:6789/0 conn(0x7f7c1810d960 legacy=0x7f7c1810fd50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:17:55.081 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.906+0000 7f7c1d883640 1 -- 192.168.123.105:0/2420041882 shutdown_connections 2026-03-09T20:17:55.081 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.906+0000 7f7c1d883640 1 -- 192.168.123.105:0/2420041882 >> 192.168.123.105:0/2420041882 conn(0x7f7c1807be30 msgr2=0x7f7c1807e290 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:17:55.081 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.906+0000 7f7c1d883640 1 -- 192.168.123.105:0/2420041882 shutdown_connections 2026-03-09T20:17:55.081 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.906+0000 7f7c1d883640 1 -- 192.168.123.105:0/2420041882 wait complete. 
2026-03-09T20:17:55.081 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.907+0000 7f7c1d883640 1 Processor -- start 2026-03-09T20:17:55.081 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.907+0000 7f7c1d883640 1 -- start start 2026-03-09T20:17:55.082 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.907+0000 7f7c1d883640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f7c181a28b0 con 0x7f7c1810d960 2026-03-09T20:17:55.082 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.907+0000 7f7c16ffd640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f7c1810d960 0x7f7c181a21a0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:51054/0 (socket says 192.168.123.105:51054) 2026-03-09T20:17:55.082 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.907+0000 7f7c16ffd640 1 -- 192.168.123.105:0/994014221 learned_addr learned my addr 192.168.123.105:0/994014221 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:17:55.082 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.908+0000 7f7c1c881640 1 -- 192.168.123.105:0/994014221 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3248043751 0 0) 0x7f7c181a28b0 con 0x7f7c1810d960 2026-03-09T20:17:55.082 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.908+0000 7f7c1c881640 1 -- 192.168.123.105:0/994014221 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f7bec003620 con 0x7f7c1810d960 2026-03-09T20:17:55.082 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.908+0000 7f7c1c881640 1 -- 192.168.123.105:0/994014221 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2314044477 0 0) 0x7f7bec003620 con 0x7f7c1810d960 2026-03-09T20:17:55.082 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.908+0000 7f7c1c881640 1 -- 192.168.123.105:0/994014221 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f7c181a28b0 con 0x7f7c1810d960 2026-03-09T20:17:55.082 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.908+0000 7f7c1c881640 1 -- 192.168.123.105:0/994014221 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f7c000032e0 con 0x7f7c1810d960 2026-03-09T20:17:55.082 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.908+0000 7f7c1c881640 1 -- 192.168.123.105:0/994014221 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2381496423 0 0) 0x7f7c181a28b0 con 0x7f7c1810d960 2026-03-09T20:17:55.082 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.908+0000 7f7c1c881640 1 -- 192.168.123.105:0/994014221 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f7c181a2a80 con 0x7f7c1810d960 2026-03-09T20:17:55.082 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.908+0000 7f7c1d883640 1 -- 192.168.123.105:0/994014221 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f7c181a2d90 con 0x7f7c1810d960 2026-03-09T20:17:55.082 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.908+0000 7f7c1d883640 1 -- 192.168.123.105:0/994014221 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f7c181a6920 con 0x7f7c1810d960 2026-03-09T20:17:55.082 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.909+0000 7f7c1c881640 1 -- 192.168.123.105:0/994014221 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(0 keys) ==== 4+0+0 (unknown 0 0 0) 0x7f7c00003020 con 0x7f7c1810d960 2026-03-09T20:17:55.082 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.909+0000 7f7c1c881640 1 -- 192.168.123.105:0/994014221 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f7c00004cf0 con 0x7f7c1810d960 2026-03-09T20:17:55.082 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.909+0000 7f7c1c881640 1 -- 192.168.123.105:0/994014221 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 1) ==== 811+0+0 (unknown 4133961934 0 0) 0x7f7c00005220 con 0x7f7c1810d960 2026-03-09T20:17:55.082 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.909+0000 7f7c1c881640 1 -- 192.168.123.105:0/994014221 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (unknown 1808929742 0 0) 0x7f7c00006750 con 0x7f7c1810d960 2026-03-09T20:17:55.082 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.910+0000 7f7c1d883640 1 -- 192.168.123.105:0/994014221 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f7bdc005180 con 0x7f7c1810d960 2026-03-09T20:17:55.082 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.911+0000 7f7c1c881640 1 -- 192.168.123.105:0/994014221 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+75931 (unknown 1092875540 0 4127419540) 0x7f7c000069a0 con 0x7f7c1810d960 2026-03-09T20:17:55.082 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.941+0000 7f7c1d883640 1 -- 192.168.123.105:0/994014221 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "config assimilate-conf"} v 0) -- 0x7f7bdc005470 con 0x7f7c1810d960 2026-03-09T20:17:55.082 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.947+0000 7f7c1c881640 1 -- 192.168.123.105:0/994014221 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "config assimilate-conf"}]=0 v2) ==== 70+0+356 (unknown 1213389831 0 2201627273) 0x7f7c00005780 con 0x7f7c1810d960 2026-03-09T20:17:55.082 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.948+0000 7f7c1c881640 1 -- 192.168.123.105:0/994014221 <== mon.0 v1:192.168.123.105:6789/0 11 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f7c00005e90 con 0x7f7c1810d960 2026-03-09T20:17:55.082 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.949+0000 7f7c1d883640 1 -- 192.168.123.105:0/994014221 >> v1:192.168.123.105:6789/0 conn(0x7f7c1810d960 legacy=0x7f7c181a21a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:17:55.082 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.949+0000 7f7c1d883640 1 -- 192.168.123.105:0/994014221 shutdown_connections 2026-03-09T20:17:55.082 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 
2026-03-09T20:17:54.949+0000 7f7c1d883640 1 -- 192.168.123.105:0/994014221 >> 192.168.123.105:0/994014221 conn(0x7f7c1807be30 msgr2=0x7f7c1810e280 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:17:55.082 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.950+0000 7f7c1d883640 1 -- 192.168.123.105:0/994014221 shutdown_connections 2026-03-09T20:17:55.082 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:54.950+0000 7f7c1d883640 1 -- 192.168.123.105:0/994014221 wait complete. 2026-03-09T20:17:55.082 INFO:teuthology.orchestra.run.vm05.stdout:Generating new minimal ceph.conf... 2026-03-09T20:17:55.377 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:55.212+0000 7f7cb8818640 1 Processor -- start 2026-03-09T20:17:55.377 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:55.213+0000 7f7cb8818640 1 -- start start 2026-03-09T20:17:55.377 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:55.213+0000 7f7cb8818640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f7cb4108f60 con 0x7f7cb4104b90 2026-03-09T20:17:55.378 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:55.213+0000 7f7cb2d76640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f7cb4104b90 0x7f7cb4104f90 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:51068/0 (socket says 192.168.123.105:51068) 2026-03-09T20:17:55.378 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:55.213+0000 7f7cb2d76640 1 -- 192.168.123.105:0/3933175363 learned_addr learned my addr 192.168.123.105:0/3933175363 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:17:55.378 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:55.214+0000 7f7cb1d74640 1 -- 192.168.123.105:0/3933175363 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 4069177793 0 0) 0x7f7cb4108f60 con 0x7f7cb4104b90 2026-03-09T20:17:55.378 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:55.214+0000 7f7cb1d74640 1 -- 192.168.123.105:0/3933175363 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f7ca0003540 con 0x7f7cb4104b90 2026-03-09T20:17:55.378 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:55.214+0000 7f7cb1d74640 1 -- 192.168.123.105:0/3933175363 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 882961283 0 0) 0x7f7ca0003540 con 0x7f7cb4104b90 2026-03-09T20:17:55.378 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:55.214+0000 7f7cb1d74640 1 -- 192.168.123.105:0/3933175363 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f7cb410a140 con 0x7f7cb4104b90 2026-03-09T20:17:55.378 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:55.214+0000 7f7cb1d74640 1 -- 192.168.123.105:0/3933175363 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f7c9c002e10 con 0x7f7cb4104b90 2026-03-09T20:17:55.378 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:55.214+0000 7f7cb1d74640 1 -- 192.168.123.105:0/3933175363 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 973+0+0 (unknown 
630610420 0 0) 0x7f7c9c0033e0 con 0x7f7cb4104b90 2026-03-09T20:17:55.378 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:55.214+0000 7f7cb1d74640 1 -- 192.168.123.105:0/3933175363 <== mon.0 v1:192.168.123.105:6789/0 5 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f7c9c005710 con 0x7f7cb4104b90 2026-03-09T20:17:55.378 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:55.215+0000 7f7cb8818640 1 -- 192.168.123.105:0/3933175363 >> v1:192.168.123.105:6789/0 conn(0x7f7cb4104b90 legacy=0x7f7cb4104f90 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:17:55.378 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:55.215+0000 7f7cb8818640 1 -- 192.168.123.105:0/3933175363 shutdown_connections 2026-03-09T20:17:55.378 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:55.215+0000 7f7cb8818640 1 -- 192.168.123.105:0/3933175363 >> 192.168.123.105:0/3933175363 conn(0x7f7cb40fff40 msgr2=0x7f7cb4102360 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:17:55.378 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:55.215+0000 7f7cb8818640 1 -- 192.168.123.105:0/3933175363 shutdown_connections 2026-03-09T20:17:55.378 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:55.215+0000 7f7cb8818640 1 -- 192.168.123.105:0/3933175363 wait complete. 2026-03-09T20:17:55.378 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:55.216+0000 7f7cb8818640 1 Processor -- start 2026-03-09T20:17:55.378 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:55.216+0000 7f7cb8818640 1 -- start start 2026-03-09T20:17:55.378 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:55.216+0000 7f7cb8818640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f7cb419a410 con 0x7f7cb4104b90 2026-03-09T20:17:55.378 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:55.216+0000 7f7cb2d76640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f7cb4104b90 0x7f7cb4199d00 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:51072/0 (socket says 192.168.123.105:51072) 2026-03-09T20:17:55.378 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:55.216+0000 7f7cb2d76640 1 -- 192.168.123.105:0/2308763832 learned_addr learned my addr 192.168.123.105:0/2308763832 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:17:55.378 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:55.216+0000 7f7c93fff640 1 -- 192.168.123.105:0/2308763832 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3052110094 0 0) 0x7f7cb419a410 con 0x7f7cb4104b90 2026-03-09T20:17:55.378 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:55.217+0000 7f7c93fff640 1 -- 192.168.123.105:0/2308763832 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f7c88003620 con 0x7f7cb4104b90 2026-03-09T20:17:55.378 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:55.217+0000 7f7c93fff640 1 -- 192.168.123.105:0/2308763832 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 214899098 0 0) 0x7f7c88003620 con 0x7f7cb4104b90 
2026-03-09T20:17:55.378 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:55.217+0000 7f7c93fff640 1 -- 192.168.123.105:0/2308763832 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f7cb419a410 con 0x7f7cb4104b90 2026-03-09T20:17:55.378 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:55.217+0000 7f7c93fff640 1 -- 192.168.123.105:0/2308763832 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f7c9c002890 con 0x7f7cb4104b90 2026-03-09T20:17:55.378 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:55.217+0000 7f7c93fff640 1 -- 192.168.123.105:0/2308763832 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 1458470428 0 0) 0x7f7cb419a410 con 0x7f7cb4104b90 2026-03-09T20:17:55.378 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:55.217+0000 7f7c93fff640 1 -- 192.168.123.105:0/2308763832 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f7cb419a5e0 con 0x7f7cb4104b90 2026-03-09T20:17:55.378 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:55.217+0000 7f7cb8818640 1 -- 192.168.123.105:0/2308763832 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f7cb419a8d0 con 0x7f7cb4104b90 2026-03-09T20:17:55.378 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:55.217+0000 7f7cb8818640 1 -- 192.168.123.105:0/2308763832 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f7cb419e410 con 0x7f7cb4104b90 2026-03-09T20:17:55.378 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:55.218+0000 7f7c93fff640 1 -- 192.168.123.105:0/2308763832 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f7c9c004c10 con 0x7f7cb4104b90 2026-03-09T20:17:55.378 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:55.218+0000 7f7c93fff640 1 -- 192.168.123.105:0/2308763832 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f7c9c005ee0 con 0x7f7cb4104b90 2026-03-09T20:17:55.378 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:55.218+0000 7f7c93fff640 1 -- 192.168.123.105:0/2308763832 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 1) ==== 811+0+0 (unknown 4133961934 0 0) 0x7f7c9c006510 con 0x7f7cb4104b90 2026-03-09T20:17:55.378 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:55.218+0000 7f7c93fff640 1 -- 192.168.123.105:0/2308763832 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (unknown 1808929742 0 0) 0x7f7c9c007ba0 con 0x7f7cb4104b90 2026-03-09T20:17:55.378 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:55.219+0000 7f7cb8818640 1 -- 192.168.123.105:0/2308763832 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f7cb4109490 con 0x7f7cb4104b90 2026-03-09T20:17:55.378 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:55.220+0000 7f7c93fff640 1 -- 192.168.123.105:0/2308763832 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+75931 (unknown 1092875540 0 4127419540) 0x7f7c9c002a80 con 0x7f7cb4104b90 2026-03-09T20:17:55.378 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:55.250+0000 7f7cb8818640 1 -- 192.168.123.105:0/2308763832 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "config generate-minimal-conf"} v 0) -- 0x7f7cb4063880 con 0x7f7cb4104b90 2026-03-09T20:17:55.378 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:55.250+0000 7f7c93fff640 1 -- 192.168.123.105:0/2308763832 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "config generate-minimal-conf"}]=0 v2) ==== 76+0+150 (unknown 1452402520 0 997669112) 0x7f7c9c002d00 con 0x7f7cb4104b90 2026-03-09T20:17:55.378 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:55.251+0000 7f7cb8818640 1 -- 192.168.123.105:0/2308763832 >> v1:192.168.123.105:6789/0 conn(0x7f7cb4104b90 legacy=0x7f7cb4199d00 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:17:55.378 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:55.251+0000 7f7cb8818640 1 -- 192.168.123.105:0/2308763832 shutdown_connections 2026-03-09T20:17:55.378 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:55.251+0000 7f7cb8818640 1 -- 192.168.123.105:0/2308763832 >> 192.168.123.105:0/2308763832 conn(0x7f7cb40fff40 msgr2=0x7f7cb41085f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:17:55.378 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:55.251+0000 7f7cb8818640 1 -- 192.168.123.105:0/2308763832 shutdown_connections 2026-03-09T20:17:55.378 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:55.251+0000 7f7cb8818640 1 -- 192.168.123.105:0/2308763832 wait complete. 2026-03-09T20:17:55.378 INFO:teuthology.orchestra.run.vm05.stdout:Restarting the monitor... 
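[editor note] The entries above record the bootstrap config steps on vm05: the options from the existing ceph.conf are folded into the monitor's centralized config database ("config assimilate-conf", acknowledged with config(24 keys)), and a minimal ceph.conf is then regenerated ("config generate-minimal-conf") before the monitor is restarted. A minimal sketch of reproducing these two steps by hand, assuming an admin keyring is available on the host (the file paths are illustrative, not taken from this run):

    # Fold any options from a legacy ceph.conf into the mon config database
    ceph config assimilate-conf -i /etc/ceph/ceph.conf
    # Emit a minimal ceph.conf (fsid + mon_host) for clients on this host
    ceph config generate-minimal-conf > /etc/ceph/ceph.conf.new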
2026-03-09T20:17:55.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 podman[51756]: 2026-03-09 20:17:55.599691429 +0000 UTC m=+0.150458589 container died 0255d2d52432b1e75cb568690481845059452a76497ad04ec3d62d9eea6c64f7 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mon-a, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS) 2026-03-09T20:17:55.903 INFO:teuthology.orchestra.run.vm05.stdout:Setting public_network to 192.168.123.0/24 in mon config section 2026-03-09T20:17:55.978 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 podman[51756]: 2026-03-09 20:17:55.713933312 +0000 UTC m=+0.264700472 container remove 0255d2d52432b1e75cb568690481845059452a76497ad04ec3d62d9eea6c64f7 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mon-a, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-09T20:17:55.978 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 bash[51756]: ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mon-a 2026-03-09T20:17:55.978 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 systemd[1]: ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@mon.a.service: Deactivated successfully. 2026-03-09T20:17:55.978 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 systemd[1]: Stopped Ceph mon.a for c0151936-1bf4-11f1-b896-23f7bea8a6ea. 2026-03-09T20:17:55.978 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 systemd[1]: Starting Ceph mon.a for c0151936-1bf4-11f1-b896-23f7bea8a6ea... 
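[editor note] The journalctl entries for ceph.mon.a above show podman stopping and removing the old mon container and systemd starting a fresh one for the same daemon; each cephadm-managed daemon is wrapped in a per-fsid systemd unit, here ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@mon.a.service. A minimal sketch of inspecting or driving an equivalent restart manually (the unit name is copied from the log; the orchestrator command assumes cephadm is managing the daemon, whereas the restart in this run was triggered internally):

    # Inspect the systemd unit that wraps the containerized mon
    systemctl status ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@mon.a.service
    # Or ask the orchestrator to restart the daemon
    ceph orch daemon restart mon.a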
2026-03-09T20:17:55.978 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 podman[51836]: 2026-03-09 20:17:55.855381993 +0000 UTC m=+0.016186741 container create ba64bd2624e60c39fccd2d7245de0169ab39fa4063b2240f07f30a01a6b5ac67 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mon-a, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0) 2026-03-09T20:17:55.978 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 podman[51836]: 2026-03-09 20:17:55.895300719 +0000 UTC m=+0.056105467 container init ba64bd2624e60c39fccd2d7245de0169ab39fa4063b2240f07f30a01a6b5ac67 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mon-a, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-09T20:17:55.978 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 podman[51836]: 2026-03-09 20:17:55.899669487 +0000 UTC m=+0.060474235 container start ba64bd2624e60c39fccd2d7245de0169ab39fa4063b2240f07f30a01a6b5ac67 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mon-a, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-09T20:17:55.978 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 bash[51836]: ba64bd2624e60c39fccd2d7245de0169ab39fa4063b2240f07f30a01a6b5ac67 2026-03-09T20:17:55.978 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 podman[51836]: 2026-03-09 20:17:55.849536972 +0000 UTC m=+0.010341730 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T20:17:55.978 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 systemd[1]: Started 
Ceph mon.a for c0151936-1bf4-11f1-b896-23f7bea8a6ea. 2026-03-09T20:17:55.978 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: set uid:gid to 167:167 (ceph:ceph) 2026-03-09T20:17:55.978 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 6 2026-03-09T20:17:55.978 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: pidfile_write: ignore empty --pid-file 2026-03-09T20:17:55.978 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: load: jerasure load: lrc 2026-03-09T20:17:55.978 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: RocksDB version: 7.9.2 2026-03-09T20:17:55.978 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Git sha 0 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Compile date 2026-02-25 18:11:04 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: DB SUMMARY 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: DB Session ID: YSMJLR8MRH2I8ZHQTMT1 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: CURRENT file: CURRENT 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: IDENTITY file: IDENTITY 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: MANIFEST file: MANIFEST-000010 size: 179 Bytes 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 1, files: 000008.sst 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 000009.log size: 88069 ; 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.error_if_exists: 0 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.create_if_missing: 0 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.paranoid_checks: 1 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.flush_verify_memtable_count: 1 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.env: 0x5654226a3dc0 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.fs: PosixFileSystem 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.info_log: 0x565422f51820 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: 
Options.max_file_opening_threads: 16 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.statistics: (nil) 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.use_fsync: 0 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.max_log_file_size: 0 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.log_file_time_to_roll: 0 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.keep_log_file_num: 1000 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.recycle_log_file_num: 0 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.allow_fallocate: 1 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.allow_mmap_reads: 0 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.allow_mmap_writes: 0 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.use_direct_reads: 0 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.create_missing_column_families: 0 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.db_log_dir: 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.wal_dir: 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.table_cache_numshardbits: 6 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.WAL_ttl_seconds: 0 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.WAL_size_limit_MB: 0 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.is_fd_close_on_exec: 1 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.advise_random_on_open: 1 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.db_write_buffer_size: 0 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.write_buffer_manager: 0x565422f55900 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 
ceph-mon[51870]: rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.use_adaptive_mutex: 0 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.rate_limiter: (nil) 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.wal_recovery_mode: 2 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.enable_thread_tracking: 0 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.enable_pipelined_write: 0 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.unordered_write: 0 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-09T20:17:55.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.row_cache: None 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.wal_filter: None 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.allow_ingest_behind: 0 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.two_write_queues: 0 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.manual_wal_flush: 0 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.wal_compression: 0 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.atomic_flush: 0 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.persist_stats_to_disk: 0 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.write_dbid_to_manifest: 0 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.log_readahead_size: 0 2026-03-09T20:17:55.980 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.best_efforts_recovery: 0 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.allow_data_in_errors: 0 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.db_host_id: __hostname__ 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.enforce_single_del_contracts: true 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.max_background_jobs: 2 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.max_background_compactions: -1 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.max_subcompactions: 1 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.delayed_write_rate : 16777216 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.max_total_wal_size: 0 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.stats_dump_period_sec: 600 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.stats_persist_period_sec: 600 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.max_open_files: -1 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.bytes_per_sync: 0 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.wal_bytes_per_sync: 0 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.strict_bytes_per_sync: 0 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.compaction_readahead_size: 0 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.max_background_flushes: -1 2026-03-09T20:17:55.980 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Compression algorithms supported: 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: kZSTD supported: 0 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: kXpressCompression supported: 0 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: kBZip2Compression supported: 0 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: kLZ4Compression supported: 1 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: kZlibCompression supported: 1 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: kLZ4HCCompression supported: 1 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: kSnappyCompression supported: 1 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Fast CRC32 supported: Supported on x86 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: DMutex implementation: pthread_mutex_t 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-09T20:17:55.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.merge_operator: 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.compaction_filter: None 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.compaction_filter_factory: None 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.sst_partitioner_factory: None 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.memtable_factory: SkipListFactory 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.table_factory: BlockBasedTable 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x565422f51460) 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout: cache_index_and_filter_blocks: 1 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-09T20:17:55.981 
INFO:journalctl@ceph.mon.a.vm05.stdout: pin_top_level_index_and_filter: 1 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout: index_type: 0 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout: data_block_index_type: 0 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout: index_shortening: 1 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout: data_block_hash_table_util_ratio: 0.750000 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout: checksum: 4 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout: no_block_cache: 0 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout: block_cache: 0x565422f749b0 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout: block_cache_name: BinnedLRUCache 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout: block_cache_options: 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout: capacity : 536870912 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout: num_shard_bits : 4 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout: strict_capacity_limit : 0 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout: high_pri_pool_ratio: 0.000 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout: block_cache_compressed: (nil) 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout: persistent_cache: (nil) 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout: block_size: 4096 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout: block_size_deviation: 10 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout: block_restart_interval: 16 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout: index_block_restart_interval: 1 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout: metadata_block_size: 4096 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout: partition_filters: 0 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout: use_delta_encoding: 1 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout: filter_policy: bloomfilter 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout: whole_key_filtering: 1 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout: verify_compression: 0 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout: read_amp_bytes_per_bit: 0 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout: format_version: 5 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout: enable_index_compression: 1 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout: block_align: 0 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout: max_auto_readahead_size: 262144 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout: prepopulate_block_cache: 0 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout: initial_auto_readahead_size: 8192 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout: num_file_reads_for_auto_readahead: 2 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.write_buffer_size: 33554432 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.max_write_buffer_number: 2 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.compression: NoCompression 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: 
Options.bottommost_compression: Disabled 2026-03-09T20:17:55.981 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.prefix_extractor: nullptr 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.num_levels: 7 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.compression_opts.window_bits: -14 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.compression_opts.level: 32767 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.compression_opts.strategy: 0 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-09T20:17:55.982 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.compression_opts.enabled: false 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.target_file_size_base: 67108864 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.target_file_size_multiplier: 1 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.arena_block_size: 1048576 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: 
Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.disable_auto_compactions: 0 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.inplace_update_support: 0 2026-03-09T20:17:55.982 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.inplace_update_num_locks: 10000 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.memtable_huge_page_size: 0 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.bloom_locality: 0 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.max_successive_merges: 0 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.optimize_filters_for_hits: 0 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.paranoid_file_checks: 0 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 
ceph-mon[51870]: rocksdb: Options.force_consistency_checks: 1 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.report_bg_io_stats: 0 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.ttl: 2592000 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.periodic_compaction_seconds: 0 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.enable_blob_files: false 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.min_blob_size: 0 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.blob_file_size: 268435456 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.blob_compression_type: NoCompression 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.enable_blob_garbage_collection: false 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.blob_file_starting_level: 0 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: bb021d04-c453-4a41-ac83-1c417d2adc83 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773087475922378, "job": 1, "event": "recovery_started", "wal_files": [9]} 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: 
rocksdb: EVENT_LOG_v1 {"time_micros": 1773087475923967, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 84690, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 258, "table_properties": {"data_size": 82841, "index_size": 238, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 581, "raw_key_size": 10837, "raw_average_key_size": 48, "raw_value_size": 76758, "raw_average_value_size": 341, "num_data_blocks": 10, "num_entries": 225, "num_filter_entries": 225, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773087475, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bb021d04-c453-4a41-ac83-1c417d2adc83", "db_session_id": "YSMJLR8MRH2I8ZHQTMT1", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}} 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773087475924023, "job": 1, "event": "recovery_finished"} 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: [db/version_set.cc:5047] Creating manifest 15 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-a/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x565422f76e00 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: DB pointer 0x565423088000 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout: ** DB Stats ** 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout: Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 
2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout: Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout: 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout: ** Compaction Stats [default] ** 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout: L0 2/0 84.53 KB 0.5 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 59.4 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout: Sum 2/0 84.53 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 59.4 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 59.4 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout: 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout: ** Compaction Stats [default] ** 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout: --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 59.4 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-09T20:17:55.983 INFO:journalctl@ceph.mon.a.vm05.stdout: 2026-03-09T20:17:55.984 INFO:journalctl@ceph.mon.a.vm05.stdout: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-09T20:17:55.984 INFO:journalctl@ceph.mon.a.vm05.stdout: 2026-03-09T20:17:55.984 INFO:journalctl@ceph.mon.a.vm05.stdout: Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T20:17:55.984 INFO:journalctl@ceph.mon.a.vm05.stdout: Flush(GB): cumulative 0.000, interval 0.000 2026-03-09T20:17:55.984 INFO:journalctl@ceph.mon.a.vm05.stdout: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-09T20:17:55.984 INFO:journalctl@ceph.mon.a.vm05.stdout: AddFile(Total Files): cumulative 0, interval 0 2026-03-09T20:17:55.984 INFO:journalctl@ceph.mon.a.vm05.stdout: AddFile(L0 Files): cumulative 0, interval 0 2026-03-09T20:17:55.984 INFO:journalctl@ceph.mon.a.vm05.stdout: AddFile(Keys): cumulative 0, interval 0 2026-03-09T20:17:55.984 INFO:journalctl@ceph.mon.a.vm05.stdout: Cumulative compaction: 0.00 GB write, 16.25 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T20:17:55.984 INFO:journalctl@ceph.mon.a.vm05.stdout: Interval compaction: 0.00 GB write, 16.25 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T20:17:55.984 INFO:journalctl@ceph.mon.a.vm05.stdout: Stalls(count): 0 level0_slowdown, 0 
level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-09T20:17:55.984 INFO:journalctl@ceph.mon.a.vm05.stdout: Block cache BinnedLRUCache@0x565422f749b0#6 capacity: 512.00 MB usage: 26.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 6e-06 secs_since: 0 2026-03-09T20:17:55.984 INFO:journalctl@ceph.mon.a.vm05.stdout: Block cache entry stats(count,size,portion): DataBlock(3,25.48 KB,0.00486076%) FilterBlock(2,0.77 KB,0.000146031%) IndexBlock(2,0.42 KB,8.04663e-05%) Misc(1,0.00 KB,0%) 2026-03-09T20:17:55.984 INFO:journalctl@ceph.mon.a.vm05.stdout: 2026-03-09T20:17:55.984 INFO:journalctl@ceph.mon.a.vm05.stdout: ** File Read Latency Histogram By Level [default] ** 2026-03-09T20:17:55.984 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: starting mon.a rank 0 at public addrs v1:192.168.123.105:6789/0 at bind addrs v1:192.168.123.105:6789/0 mon_data /var/lib/ceph/mon/ceph-a fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea 2026-03-09T20:17:55.984 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: mon.a@-1(???) e1 preinit fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea 2026-03-09T20:17:55.984 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: mon.a@-1(???).mds e1 new map 2026-03-09T20:17:55.984 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: mon.a@-1(???).mds e1 print_map 2026-03-09T20:17:55.984 INFO:journalctl@ceph.mon.a.vm05.stdout: e1 2026-03-09T20:17:55.984 INFO:journalctl@ceph.mon.a.vm05.stdout: btime 2026-03-09T20:17:54:448734+0000 2026-03-09T20:17:55.984 INFO:journalctl@ceph.mon.a.vm05.stdout: enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T20:17:55.984 INFO:journalctl@ceph.mon.a.vm05.stdout: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes} 2026-03-09T20:17:55.984 INFO:journalctl@ceph.mon.a.vm05.stdout: legacy client fscid: -1 2026-03-09T20:17:55.984 INFO:journalctl@ceph.mon.a.vm05.stdout: 2026-03-09T20:17:55.984 INFO:journalctl@ceph.mon.a.vm05.stdout: No filesystems configured 2026-03-09T20:17:55.984 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: mon.a@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires 2026-03-09T20:17:55.984 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T20:17:55.984 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T20:17:55.984 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T20:17:55.984 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: mon.a@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3 2026-03-09T20:17:56.230 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: mon.a is new leader, mons a in 
quorum (ranks 0) 2026-03-09T20:17:56.230 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: monmap epoch 1 2026-03-09T20:17:56.230 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea 2026-03-09T20:17:56.230 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: last_changed 2026-03-09T20:17:53.169307+0000 2026-03-09T20:17:56.230 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: created 2026-03-09T20:17:53.169307+0000 2026-03-09T20:17:56.230 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: min_mon_release 19 (squid) 2026-03-09T20:17:56.230 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: election_strategy: 1 2026-03-09T20:17:56.230 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: 0: v1:192.168.123.105:6789/0 mon.a 2026-03-09T20:17:56.230 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: fsmap 2026-03-09T20:17:56.230 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: osdmap e1: 0 total, 0 up, 0 in 2026-03-09T20:17:56.230 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:55 vm05 ceph-mon[51870]: mgrmap e1: no daemons active 2026-03-09T20:17:56.240 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.062+0000 7f572af63640 1 Processor -- start 2026-03-09T20:17:56.240 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.063+0000 7f572af63640 1 -- start start 2026-03-09T20:17:56.240 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.064+0000 7f572af63640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f572410cd80 con 0x7f5724108950 2026-03-09T20:17:56.240 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.064+0000 7f5728cd8640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f5724108950 0x7f5724108d50 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:51080/0 (socket says 192.168.123.105:51080) 2026-03-09T20:17:56.240 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.064+0000 7f5728cd8640 1 -- 192.168.123.105:0/3529654722 learned_addr learned my addr 192.168.123.105:0/3529654722 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:17:56.240 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.064+0000 7f571b7fe640 1 -- 192.168.123.105:0/3529654722 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3230873174 0 0) 0x7f572410cd80 con 0x7f5724108950 2026-03-09T20:17:56.240 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.064+0000 7f571b7fe640 1 -- 192.168.123.105:0/3529654722 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f56fc003620 con 0x7f5724108950 2026-03-09T20:17:56.240 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.065+0000 7f571b7fe640 1 -- 192.168.123.105:0/3529654722 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 3140744623 0 0) 0x7f56fc003620 con 0x7f5724108950 2026-03-09T20:17:56.240 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.065+0000 7f571b7fe640 1 -- 
192.168.123.105:0/3529654722 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f572410df60 con 0x7f5724108950 2026-03-09T20:17:56.240 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.065+0000 7f571b7fe640 1 -- 192.168.123.105:0/3529654722 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f570c002e10 con 0x7f5724108950 2026-03-09T20:17:56.241 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.065+0000 7f571b7fe640 1 -- 192.168.123.105:0/3529654722 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f570c0033e0 con 0x7f5724108950 2026-03-09T20:17:56.241 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.065+0000 7f572af63640 1 -- 192.168.123.105:0/3529654722 >> v1:192.168.123.105:6789/0 conn(0x7f5724108950 legacy=0x7f5724108d50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:17:56.241 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.065+0000 7f572af63640 1 -- 192.168.123.105:0/3529654722 shutdown_connections 2026-03-09T20:17:56.241 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.065+0000 7f572af63640 1 -- 192.168.123.105:0/3529654722 >> 192.168.123.105:0/3529654722 conn(0x7f572407bdf0 msgr2=0x7f572407c240 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:17:56.241 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.066+0000 7f572af63640 1 -- 192.168.123.105:0/3529654722 shutdown_connections 2026-03-09T20:17:56.241 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.066+0000 7f572af63640 1 -- 192.168.123.105:0/3529654722 wait complete. 
2026-03-09T20:17:56.241 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.066+0000 7f572af63640 1 Processor -- start 2026-03-09T20:17:56.241 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.066+0000 7f572af63640 1 -- start start 2026-03-09T20:17:56.241 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.066+0000 7f572af63640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f572419ec80 con 0x7f5724108950 2026-03-09T20:17:56.241 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.067+0000 7f5728cd8640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f5724108950 0x7f572419e570 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:51084/0 (socket says 192.168.123.105:51084) 2026-03-09T20:17:56.241 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.067+0000 7f5728cd8640 1 -- 192.168.123.105:0/2549523660 learned_addr learned my addr 192.168.123.105:0/2549523660 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:17:56.241 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.067+0000 7f5719ffb640 1 -- 192.168.123.105:0/2549523660 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 495165181 0 0) 0x7f572419ec80 con 0x7f5724108950 2026-03-09T20:17:56.241 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.067+0000 7f5719ffb640 1 -- 192.168.123.105:0/2549523660 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f56f4003620 con 0x7f5724108950 2026-03-09T20:17:56.241 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.067+0000 7f5719ffb640 1 -- 192.168.123.105:0/2549523660 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3451376816 0 0) 0x7f56f4003620 con 0x7f5724108950 2026-03-09T20:17:56.241 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.067+0000 7f5719ffb640 1 -- 192.168.123.105:0/2549523660 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f572419ec80 con 0x7f5724108950 2026-03-09T20:17:56.241 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.067+0000 7f5719ffb640 1 -- 192.168.123.105:0/2549523660 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f570c003170 con 0x7f5724108950 2026-03-09T20:17:56.241 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.067+0000 7f5719ffb640 1 -- 192.168.123.105:0/2549523660 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 3478488977 0 0) 0x7f572419ec80 con 0x7f5724108950 2026-03-09T20:17:56.241 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.067+0000 7f5719ffb640 1 -- 192.168.123.105:0/2549523660 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f572419ee50 con 0x7f5724108950 2026-03-09T20:17:56.241 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.067+0000 7f572af63640 1 -- 192.168.123.105:0/2549523660 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f572419f160 con 0x7f5724108950 2026-03-09T20:17:56.241 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.067+0000 7f5719ffb640 1 -- 192.168.123.105:0/2549523660 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f570c004d10 con 0x7f5724108950 2026-03-09T20:17:56.241 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.068+0000 7f5719ffb640 1 -- 192.168.123.105:0/2549523660 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f570c006130 con 0x7f5724108950 2026-03-09T20:17:56.241 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.068+0000 7f572af63640 1 -- 192.168.123.105:0/2549523660 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f57241a2c70 con 0x7f5724108950 2026-03-09T20:17:56.241 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.068+0000 7f5719ffb640 1 -- 192.168.123.105:0/2549523660 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 1) ==== 811+0+0 (unknown 4133961934 0 0) 0x7f570c006680 con 0x7f5724108950 2026-03-09T20:17:56.241 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.068+0000 7f5719ffb640 1 -- 192.168.123.105:0/2549523660 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (unknown 1808929742 0 0) 0x7f570c007d80 con 0x7f5724108950 2026-03-09T20:17:56.241 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.069+0000 7f572af63640 1 -- 192.168.123.105:0/2549523660 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f56f0005180 con 0x7f5724108950 2026-03-09T20:17:56.241 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.070+0000 7f5719ffb640 1 -- 192.168.123.105:0/2549523660 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+75931 (unknown 1092875540 0 4127419540) 0x7f570c0048e0 con 0x7f5724108950 2026-03-09T20:17:56.241 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.100+0000 7f572af63640 1 -- 192.168.123.105:0/2549523660 --> v1:192.168.123.105:6789/0 -- mon_command([{prefix=config set, name=public_network}] v 0) -- 0x7f56f0005470 con 0x7f5724108950 2026-03-09T20:17:56.241 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.103+0000 7f5719ffb640 1 -- 192.168.123.105:0/2549523660 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{prefix=config set, name=public_network}]=0 v3)=0 v3) ==== 127+0+0 (unknown 808082368 0 0) 0x7f570c004ac0 con 0x7f5724108950 2026-03-09T20:17:56.241 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.104+0000 7f572af63640 1 -- 192.168.123.105:0/2549523660 >> v1:192.168.123.105:6789/0 conn(0x7f5724108950 legacy=0x7f572419e570 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:17:56.241 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.105+0000 7f572af63640 1 -- 192.168.123.105:0/2549523660 shutdown_connections 2026-03-09T20:17:56.241 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.105+0000 7f572af63640 1 -- 192.168.123.105:0/2549523660 >> 192.168.123.105:0/2549523660 conn(0x7f572407bdf0 msgr2=0x7f57241057c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:17:56.241 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 
2026-03-09T20:17:56.105+0000 7f572af63640 1 -- 192.168.123.105:0/2549523660 shutdown_connections 2026-03-09T20:17:56.241 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.105+0000 7f572af63640 1 -- 192.168.123.105:0/2549523660 wait complete. 2026-03-09T20:17:56.241 INFO:teuthology.orchestra.run.vm05.stdout:Wrote config to /etc/ceph/ceph.conf 2026-03-09T20:17:56.241 INFO:teuthology.orchestra.run.vm05.stdout:Wrote keyring to /etc/ceph/ceph.client.admin.keyring 2026-03-09T20:17:56.241 INFO:teuthology.orchestra.run.vm05.stdout:Creating mgr... 2026-03-09T20:17:56.241 INFO:teuthology.orchestra.run.vm05.stdout:Verifying port 0.0.0.0:9283 ... 2026-03-09T20:17:56.241 INFO:teuthology.orchestra.run.vm05.stdout:Verifying port 0.0.0.0:8765 ... 2026-03-09T20:17:56.391 INFO:teuthology.orchestra.run.vm05.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@mgr.y 2026-03-09T20:17:56.391 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stderr Failed to reset failed state of unit ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@mgr.y.service: Unit ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@mgr.y.service not loaded. 2026-03-09T20:17:56.508 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea.target.wants/ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@mgr.y.service → /etc/systemd/system/ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@.service. 2026-03-09T20:17:56.676 INFO:teuthology.orchestra.run.vm05.stdout:firewalld does not appear to be present 2026-03-09T20:17:56.676 INFO:teuthology.orchestra.run.vm05.stdout:Not possible to enable service . firewalld.service is not available 2026-03-09T20:17:56.677 INFO:teuthology.orchestra.run.vm05.stdout:firewalld does not appear to be present 2026-03-09T20:17:56.677 INFO:teuthology.orchestra.run.vm05.stdout:Not possible to open ports <[9283, 8765]>. firewalld.service is not available 2026-03-09T20:17:56.677 INFO:teuthology.orchestra.run.vm05.stdout:Waiting for mgr to start... 2026-03-09T20:17:56.677 INFO:teuthology.orchestra.run.vm05.stdout:Waiting for mgr... 
2026-03-09T20:17:56.997 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 2026-03-09T20:17:56.997 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout { 2026-03-09T20:17:56.997 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "fsid": "c0151936-1bf4-11f1-b896-23f7bea8a6ea", 2026-03-09T20:17:56.997 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "health": { 2026-03-09T20:17:56.997 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-09T20:17:56.997 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-09T20:17:56.997 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-09T20:17:57.000 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:17:57.000 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-09T20:17:57.000 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-09T20:17:57.000 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 0 2026-03-09T20:17:57.000 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout ], 2026-03-09T20:17:57.000 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-09T20:17:57.000 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "a" 2026-03-09T20:17:57.000 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout ], 2026-03-09T20:17:57.000 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "quorum_age": 0, 2026-03-09T20:17:57.000 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-09T20:17:57.000 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T20:17:57.000 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 
"num_objects": 0, 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "btime": "2026-03-09T20:17:54:448734+0000", 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "restful" 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout ], 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "modified": "2026-03-09T20:17:54.449435+0000", 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout } 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.816+0000 7f7011032640 1 Processor -- start 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.817+0000 7f7011032640 1 -- start start 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.817+0000 7f7011032640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f700c111530 con 0x7f700c074160 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.817+0000 7f700bfff640 1 --1- 
>> v1:192.168.123.105:6789/0 conn(0x7f700c074160 0x7f700c074560 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:51110/0 (socket says 192.168.123.105:51110) 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.817+0000 7f700bfff640 1 -- 192.168.123.105:0/3283936861 learned_addr learned my addr 192.168.123.105:0/3283936861 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.818+0000 7f700affd640 1 -- 192.168.123.105:0/3283936861 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1341574606 0 0) 0x7f700c111530 con 0x7f700c074160 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.818+0000 7f700affd640 1 -- 192.168.123.105:0/3283936861 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f6fe4003620 con 0x7f700c074160 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.819+0000 7f700affd640 1 -- 192.168.123.105:0/3283936861 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 4064229176 0 0) 0x7f6fe4003620 con 0x7f700c074160 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.819+0000 7f700affd640 1 -- 192.168.123.105:0/3283936861 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f700c112710 con 0x7f700c074160 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.819+0000 7f700affd640 1 -- 192.168.123.105:0/3283936861 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f6ffc002e10 con 0x7f700c074160 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.819+0000 7f700affd640 1 -- 192.168.123.105:0/3283936861 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f6ffc003520 con 0x7f700c074160 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.819+0000 7f700affd640 1 -- 192.168.123.105:0/3283936861 <== mon.0 v1:192.168.123.105:6789/0 5 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f6ffc006280 con 0x7f700c074160 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.823+0000 7f7011032640 1 -- 192.168.123.105:0/3283936861 >> v1:192.168.123.105:6789/0 conn(0x7f700c074160 legacy=0x7f700c074560 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.828+0000 7f7011032640 1 -- 192.168.123.105:0/3283936861 shutdown_connections 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.828+0000 7f7011032640 1 -- 192.168.123.105:0/3283936861 >> 192.168.123.105:0/3283936861 conn(0x7f700c06f4e0 msgr2=0x7f700c071920 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.828+0000 7f7011032640 1 -- 192.168.123.105:0/3283936861 shutdown_connections 2026-03-09T20:17:57.001 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.828+0000 7f7011032640 1 -- 192.168.123.105:0/3283936861 wait complete. 2026-03-09T20:17:57.001 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.828+0000 7f7011032640 1 Processor -- start 2026-03-09T20:17:57.002 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.829+0000 7f7011032640 1 -- start start 2026-03-09T20:17:57.002 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.829+0000 7f7011032640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f700c1a2e20 con 0x7f700c074160 2026-03-09T20:17:57.002 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.829+0000 7f700bfff640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f700c074160 0x7f700c1a2710 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:51112/0 (socket says 192.168.123.105:51112) 2026-03-09T20:17:57.002 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.829+0000 7f700bfff640 1 -- 192.168.123.105:0/1411401586 learned_addr learned my addr 192.168.123.105:0/1411401586 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:17:57.002 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.829+0000 7f70097fa640 1 -- 192.168.123.105:0/1411401586 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 432556055 0 0) 0x7f700c1a2e20 con 0x7f700c074160 2026-03-09T20:17:57.002 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.829+0000 7f70097fa640 1 -- 192.168.123.105:0/1411401586 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f6fdc003620 con 0x7f700c074160 2026-03-09T20:17:57.002 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.829+0000 7f70097fa640 1 -- 192.168.123.105:0/1411401586 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3750805999 0 0) 0x7f6fdc003620 con 0x7f700c074160 2026-03-09T20:17:57.002 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.829+0000 7f70097fa640 1 -- 192.168.123.105:0/1411401586 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f700c1a2e20 con 0x7f700c074160 2026-03-09T20:17:57.002 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.830+0000 7f70097fa640 1 -- 192.168.123.105:0/1411401586 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f6ffc002890 con 0x7f700c074160 2026-03-09T20:17:57.002 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.830+0000 7f70097fa640 1 -- 192.168.123.105:0/1411401586 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 3498136280 0 0) 0x7f700c1a2e20 con 0x7f700c074160 2026-03-09T20:17:57.002 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.830+0000 7f70097fa640 1 -- 192.168.123.105:0/1411401586 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f700c1a2ff0 con 0x7f700c074160 2026-03-09T20:17:57.002 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.830+0000 7f7011032640 1 -- 192.168.123.105:0/1411401586 --> 
v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f700c1a32a0 con 0x7f700c074160 2026-03-09T20:17:57.002 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.830+0000 7f7011032640 1 -- 192.168.123.105:0/1411401586 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f700c1a6e90 con 0x7f700c074160 2026-03-09T20:17:57.002 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.830+0000 7f70097fa640 1 -- 192.168.123.105:0/1411401586 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f6ffc004d10 con 0x7f700c074160 2026-03-09T20:17:57.002 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.830+0000 7f70097fa640 1 -- 192.168.123.105:0/1411401586 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f6ffc003150 con 0x7f700c074160 2026-03-09T20:17:57.002 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.831+0000 7f70097fa640 1 -- 192.168.123.105:0/1411401586 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 1) ==== 811+0+0 (unknown 4133961934 0 0) 0x7f6ffc0064c0 con 0x7f700c074160 2026-03-09T20:17:57.002 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.831+0000 7f70097fa640 1 -- 192.168.123.105:0/1411401586 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (unknown 1808929742 0 0) 0x7f6ffc007820 con 0x7f700c074160 2026-03-09T20:17:57.002 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.831+0000 7f7011032640 1 -- 192.168.123.105:0/1411401586 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f700c111ee0 con 0x7f700c074160 2026-03-09T20:17:57.002 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.834+0000 7f70097fa640 1 -- 192.168.123.105:0/1411401586 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+75931 (unknown 1092875540 0 4127419540) 0x7f6ffc004ec0 con 0x7f700c074160 2026-03-09T20:17:57.002 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.865+0000 7f7011032640 1 -- 192.168.123.105:0/1411401586 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "status", "format": "json-pretty"} v 0) -- 0x7f700c19b6c0 con 0x7f700c074160 2026-03-09T20:17:57.002 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.865+0000 7f70097fa640 1 -- 192.168.123.105:0/1411401586 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "status", "format": "json-pretty"}]=0 v0) ==== 79+0+1291 (unknown 4201413639 0 2235513257) 0x7f6ffc007a70 con 0x7f700c074160 2026-03-09T20:17:57.002 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.867+0000 7f6ff2ffd640 1 -- 192.168.123.105:0/1411401586 >> v1:192.168.123.105:6789/0 conn(0x7f700c074160 legacy=0x7f700c1a2710 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:17:57.002 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.867+0000 7f6ff2ffd640 1 -- 192.168.123.105:0/1411401586 shutdown_connections 2026-03-09T20:17:57.002 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.867+0000 7f6ff2ffd640 1 -- 192.168.123.105:0/1411401586 >> 192.168.123.105:0/1411401586 conn(0x7f700c06f4e0 
msgr2=0x7f700c071920 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:17:57.002 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.867+0000 7f6ff2ffd640 1 -- 192.168.123.105:0/1411401586 shutdown_connections 2026-03-09T20:17:57.002 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:56.867+0000 7f6ff2ffd640 1 -- 192.168.123.105:0/1411401586 wait complete. 2026-03-09T20:17:57.002 INFO:teuthology.orchestra.run.vm05.stdout:mgr not available, waiting (1/15)... 2026-03-09T20:17:57.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:57 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2549523660' entity='client.admin' 2026-03-09T20:17:57.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:57 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1411401586' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T20:17:57.409 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:17:57 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:17:57.250+0000 7fcf70f91140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T20:17:57.909 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:17:57 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:17:57.584+0000 7fcf70f91140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T20:17:57.909 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:17:57 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T20:17:57.909 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:17:57 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-09T20:17:57.909 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:17:57 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: from numpy import show_config as show_numpy_config 2026-03-09T20:17:57.909 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:17:57 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:17:57.669+0000 7fcf70f91140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T20:17:57.909 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:17:57 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:17:57.703+0000 7fcf70f91140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T20:17:57.909 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:17:57 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:17:57.776+0000 7fcf70f91140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T20:17:58.531 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:17:58 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:17:58.276+0000 7fcf70f91140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T20:17:58.531 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:17:58 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:17:58.385+0000 7fcf70f91140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T20:17:58.531 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:17:58 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:17:58.424+0000 7fcf70f91140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T20:17:58.531 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:17:58 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:17:58.459+0000 7fcf70f91140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T20:17:58.532 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:17:58 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:17:58.497+0000 7fcf70f91140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T20:17:58.532 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:17:58 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:17:58.532+0000 7fcf70f91140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T20:17:58.909 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:17:58 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:17:58.696+0000 7fcf70f91140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T20:17:58.909 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:17:58 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:17:58.747+0000 7fcf70f91140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T20:17:59.230 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:17:59 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/1939451897' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T20:17:59.230 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:17:58 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:17:58.964+0000 7fcf70f91140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout { 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "fsid": "c0151936-1bf4-11f1-b896-23f7bea8a6ea", 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "health": { 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 0 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout ], 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "a" 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout ], 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "quorum_age": 3, 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-09T20:17:59.319 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "btime": "2026-03-09T20:17:54:448734+0000", 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-09T20:17:59.319 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "restful" 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout ], 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "modified": "2026-03-09T20:17:54.449435+0000", 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout } 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:59.138+0000 7f3d08978640 1 Processor -- start 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:59.138+0000 7f3d08978640 1 -- start 
start 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:59.139+0000 7f3d08978640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f3d04074770 con 0x7f3d04073bd0 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:59.139+0000 7f3d02d76640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f3d04073bd0 0x7f3d04073fd0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:51120/0 (socket says 192.168.123.105:51120) 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:59.139+0000 7f3d02d76640 1 -- 192.168.123.105:0/3920578899 learned_addr learned my addr 192.168.123.105:0/3920578899 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:59.140+0000 7f3d01d74640 1 -- 192.168.123.105:0/3920578899 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 183538397 0 0) 0x7f3d04074770 con 0x7f3d04073bd0 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:59.140+0000 7f3d01d74640 1 -- 192.168.123.105:0/3920578899 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f3ce8003620 con 0x7f3d04073bd0 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:59.140+0000 7f3d01d74640 1 -- 192.168.123.105:0/3920578899 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 1015571869 0 0) 0x7f3ce8003620 con 0x7f3d04073bd0 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:59.140+0000 7f3d01d74640 1 -- 192.168.123.105:0/3920578899 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f3d0407d060 con 0x7f3d04073bd0 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:59.140+0000 7f3d01d74640 1 -- 192.168.123.105:0/3920578899 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f3cf4002a70 con 0x7f3d04073bd0 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:59.140+0000 7f3d01d74640 1 -- 192.168.123.105:0/3920578899 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f3cf4003140 con 0x7f3d04073bd0 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:59.141+0000 7f3d08978640 1 -- 192.168.123.105:0/3920578899 >> v1:192.168.123.105:6789/0 conn(0x7f3d04073bd0 legacy=0x7f3d04073fd0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:59.141+0000 7f3d08978640 1 -- 192.168.123.105:0/3920578899 shutdown_connections 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:59.141+0000 7f3d08978640 1 -- 192.168.123.105:0/3920578899 >> 192.168.123.105:0/3920578899 conn(0x7f3d0406f4e0 msgr2=0x7f3d04071920 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:59.141+0000 7f3d08978640 1 -- 
192.168.123.105:0/3920578899 shutdown_connections 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:59.141+0000 7f3d08978640 1 -- 192.168.123.105:0/3920578899 wait complete. 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:59.141+0000 7f3d08978640 1 Processor -- start 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:59.142+0000 7f3d08978640 1 -- start start 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:59.142+0000 7f3d08978640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f3d04086ae0 con 0x7f3d040866c0 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:59.142+0000 7f3d02d76640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f3d040866c0 0x7f3d04089bc0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:51130/0 (socket says 192.168.123.105:51130) 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:59.142+0000 7f3d02d76640 1 -- 192.168.123.105:0/1939451897 learned_addr learned my addr 192.168.123.105:0/1939451897 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:59.142+0000 7f3ce3fff640 1 -- 192.168.123.105:0/1939451897 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 278118485 0 0) 0x7f3d04086ae0 con 0x7f3d040866c0 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:59.143+0000 7f3ce3fff640 1 -- 192.168.123.105:0/1939451897 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f3ce4003620 con 0x7f3d040866c0 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:59.143+0000 7f3ce3fff640 1 -- 192.168.123.105:0/1939451897 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1919016141 0 0) 0x7f3ce4003620 con 0x7f3d040866c0 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:59.143+0000 7f3ce3fff640 1 -- 192.168.123.105:0/1939451897 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f3d04086ae0 con 0x7f3d040866c0 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:59.143+0000 7f3ce3fff640 1 -- 192.168.123.105:0/1939451897 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f3cf4004e90 con 0x7f3d040866c0 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:59.143+0000 7f3ce3fff640 1 -- 192.168.123.105:0/1939451897 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2997480090 0 0) 0x7f3d04086ae0 con 0x7f3d040866c0 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:59.143+0000 7f3ce3fff640 1 -- 192.168.123.105:0/1939451897 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f3d04086cb0 con 0x7f3d040866c0 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 
2026-03-09T20:17:59.143+0000 7f3d08978640 1 -- 192.168.123.105:0/1939451897 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f3d04086fc0 con 0x7f3d040866c0 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:59.144+0000 7f3d08978640 1 -- 192.168.123.105:0/1939451897 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f3d041b8630 con 0x7f3d040866c0 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:59.144+0000 7f3d08978640 1 -- 192.168.123.105:0/1939451897 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f3d0407cc50 con 0x7f3d040866c0 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:59.144+0000 7f3ce3fff640 1 -- 192.168.123.105:0/1939451897 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f3cf4005720 con 0x7f3d040866c0 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:59.144+0000 7f3ce3fff640 1 -- 192.168.123.105:0/1939451897 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f3cf4005d50 con 0x7f3d040866c0 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:59.145+0000 7f3ce3fff640 1 -- 192.168.123.105:0/1939451897 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 1) ==== 811+0+0 (unknown 4133961934 0 0) 0x7f3cf40073b0 con 0x7f3d040866c0 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:59.145+0000 7f3ce3fff640 1 -- 192.168.123.105:0/1939451897 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (unknown 1808929742 0 0) 0x7f3cf40069e0 con 0x7f3d040866c0 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:59.146+0000 7f3ce3fff640 1 -- 192.168.123.105:0/1939451897 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+75931 (unknown 1092875540 0 4127419540) 0x7f3cf4006d50 con 0x7f3d040866c0 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:59.177+0000 7f3d08978640 1 -- 192.168.123.105:0/1939451897 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "status", "format": "json-pretty"} v 0) -- 0x7f3d0407f710 con 0x7f3d040866c0 2026-03-09T20:17:59.320 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:59.177+0000 7f3ce3fff640 1 -- 192.168.123.105:0/1939451897 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "status", "format": "json-pretty"}]=0 v0) ==== 79+0+1291 (unknown 4201413639 0 2189461936) 0x7f3cf4005f10 con 0x7f3d040866c0 2026-03-09T20:17:59.321 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:59.184+0000 7f3ce1ffb640 1 -- 192.168.123.105:0/1939451897 >> v1:192.168.123.105:6789/0 conn(0x7f3d040866c0 legacy=0x7f3d04089bc0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:17:59.321 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:59.184+0000 7f3ce1ffb640 1 -- 192.168.123.105:0/1939451897 shutdown_connections 2026-03-09T20:17:59.321 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:59.184+0000 7f3ce1ffb640 1 -- 
192.168.123.105:0/1939451897 >> 192.168.123.105:0/1939451897 conn(0x7f3d0406f4e0 msgr2=0x7f3d0407b860 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:17:59.321 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:59.184+0000 7f3ce1ffb640 1 -- 192.168.123.105:0/1939451897 shutdown_connections 2026-03-09T20:17:59.321 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:17:59.184+0000 7f3ce1ffb640 1 -- 192.168.123.105:0/1939451897 wait complete. 2026-03-09T20:17:59.321 INFO:teuthology.orchestra.run.vm05.stdout:mgr not available, waiting (2/15)... 2026-03-09T20:17:59.479 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:17:59 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:17:59.283+0000 7fcf70f91140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T20:17:59.479 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:17:59 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:17:59.329+0000 7fcf70f91140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T20:17:59.479 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:17:59 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:17:59.369+0000 7fcf70f91140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T20:17:59.480 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:17:59 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:17:59.444+0000 7fcf70f91140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T20:17:59.811 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:17:59 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:17:59.480+0000 7fcf70f91140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T20:17:59.811 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:17:59 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:17:59.556+0000 7fcf70f91140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T20:17:59.811 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:17:59 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:17:59.671+0000 7fcf70f91140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T20:18:00.159 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:17:59 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:17:59.811+0000 7fcf70f91140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T20:18:00.159 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:17:59 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:17:59.851+0000 7fcf70f91140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T20:18:00.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:00 vm05 ceph-mon[51870]: Activating manager daemon y 2026-03-09T20:18:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:00 vm05 ceph-mon[51870]: mgrmap e2: y(active, starting, since 0.00431275s) 2026-03-09T20:18:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:00 vm05 ceph-mon[51870]: from='mgr.14100 v1:192.168.123.105:0/1504054359' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T20:18:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:00 vm05 ceph-mon[51870]: from='mgr.14100 v1:192.168.123.105:0/1504054359' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T20:18:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 
20:18:00 vm05 ceph-mon[51870]: from='mgr.14100 v1:192.168.123.105:0/1504054359' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T20:18:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:00 vm05 ceph-mon[51870]: from='mgr.14100 v1:192.168.123.105:0/1504054359' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:18:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:00 vm05 ceph-mon[51870]: from='mgr.14100 v1:192.168.123.105:0/1504054359' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T20:18:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:00 vm05 ceph-mon[51870]: Manager daemon y is now available 2026-03-09T20:18:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:00 vm05 ceph-mon[51870]: from='mgr.14100 v1:192.168.123.105:0/1504054359' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:18:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:00 vm05 ceph-mon[51870]: from='mgr.14100 v1:192.168.123.105:0/1504054359' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T20:18:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:00 vm05 ceph-mon[51870]: from='mgr.14100 v1:192.168.123.105:0/1504054359' entity='mgr.y' 2026-03-09T20:18:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:00 vm05 ceph-mon[51870]: from='mgr.14100 v1:192.168.123.105:0/1504054359' entity='mgr.y' 2026-03-09T20:18:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:00 vm05 ceph-mon[51870]: from='mgr.14100 v1:192.168.123.105:0/1504054359' entity='mgr.y' 2026-03-09T20:18:01.724 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 2026-03-09T20:18:01.724 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout { 2026-03-09T20:18:01.724 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "fsid": "c0151936-1bf4-11f1-b896-23f7bea8a6ea", 2026-03-09T20:18:01.724 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "health": { 2026-03-09T20:18:01.724 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-09T20:18:01.724 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-09T20:18:01.724 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-09T20:18:01.724 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:18:01.724 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-09T20:18:01.724 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-09T20:18:01.724 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 0 2026-03-09T20:18:01.724 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout ], 2026-03-09T20:18:01.724 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-09T20:18:01.724 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "a" 2026-03-09T20:18:01.724 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout ], 2026-03-09T20:18:01.724 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "quorum_age": 5, 2026-03-09T20:18:01.724 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-09T20:18:01.724 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1, 
2026-03-09T20:18:01.724 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-09T20:18:01.724 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-09T20:18:01.724 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:18:01.724 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-09T20:18:01.724 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T20:18:01.724 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-09T20:18:01.724 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "btime": "2026-03-09T20:17:54:448734+0000", 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-09T20:18:01.725 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "restful" 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout ], 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "modified": "2026-03-09T20:17:54.449435+0000", 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout } 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.457+0000 7f127e1cc640 1 Processor -- start 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.458+0000 7f127e1cc640 1 -- start start 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.458+0000 7f127e1cc640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f127810ab00 con 0x7f12781066d0 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.458+0000 7f12777fe640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f12781066d0 0x7f1278106ad0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:51240/0 (socket says 192.168.123.105:51240) 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.458+0000 7f12777fe640 1 -- 192.168.123.105:0/431120254 learned_addr learned my addr 192.168.123.105:0/431120254 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.458+0000 7f12767fc640 1 -- 192.168.123.105:0/431120254 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3018650936 0 0) 0x7f127810ab00 con 0x7f12781066d0 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.458+0000 7f12767fc640 1 -- 192.168.123.105:0/431120254 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f1254003620 con 0x7f12781066d0 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.459+0000 7f12767fc640 1 -- 192.168.123.105:0/431120254 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 1926041525 0 0) 0x7f1254003620 con 0x7f12781066d0 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.459+0000 7f12767fc640 1 -- 192.168.123.105:0/431120254 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f127810bce0 con 0x7f12781066d0 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 
2026-03-09T20:18:01.459+0000 7f12767fc640 1 -- 192.168.123.105:0/431120254 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f1268002e10 con 0x7f12781066d0 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.459+0000 7f12767fc640 1 -- 192.168.123.105:0/431120254 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f12680034a0 con 0x7f12781066d0 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.459+0000 7f12767fc640 1 -- 192.168.123.105:0/431120254 <== mon.0 v1:192.168.123.105:6789/0 5 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f12680057e0 con 0x7f12781066d0 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.459+0000 7f127e1cc640 1 -- 192.168.123.105:0/431120254 >> v1:192.168.123.105:6789/0 conn(0x7f12781066d0 legacy=0x7f1278106ad0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:01.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.459+0000 7f127e1cc640 1 -- 192.168.123.105:0/431120254 shutdown_connections 2026-03-09T20:18:01.726 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.459+0000 7f127e1cc640 1 -- 192.168.123.105:0/431120254 >> 192.168.123.105:0/431120254 conn(0x7f1278101e40 msgr2=0x7f12781042a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:01.726 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.459+0000 7f127e1cc640 1 -- 192.168.123.105:0/431120254 shutdown_connections 2026-03-09T20:18:01.726 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.459+0000 7f127e1cc640 1 -- 192.168.123.105:0/431120254 wait complete. 
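The interleaved stderr above comes from the bootstrap repeatedly invoking `ceph status --format json-pretty` and checking whether the mgrmap reports an active mgr, which is why the log alternates between "mgr not available, waiting (2/15)..." and, once mgr.y activates, "mgr is available". As a rough illustration only (this is a hedged sketch, not the actual teuthology/cephadm task code; retry count and delay are assumptions), the polling loop behaves roughly like:

```python
# Hedged sketch of the "wait for mgr" polling visible in the log above.
# Assumption: not the real teuthology/cephadm implementation; retries/delay are illustrative.
import json
import subprocess
import time


def wait_for_mgr(retries: int = 15, delay: float = 2.0) -> bool:
    """Poll `ceph status` until the mgrmap reports an available active mgr."""
    for attempt in range(1, retries + 1):
        out = subprocess.run(
            ["ceph", "status", "--format", "json-pretty"],
            capture_output=True, text=True, check=True,
        ).stdout
        status = json.loads(out)
        # The status dump in the log shows "mgrmap": {"available": false} until mgr.y is active.
        if status.get("mgrmap", {}).get("available"):
            print("mgr is available")
            return True
        print(f"mgr not available, waiting ({attempt}/{retries})...")
        time.sleep(delay)
    return False
```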
2026-03-09T20:18:01.726 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.460+0000 7f127e1cc640 1 Processor -- start 2026-03-09T20:18:01.726 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.460+0000 7f127e1cc640 1 -- start start 2026-03-09T20:18:01.726 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.460+0000 7f12777fe640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f12781066d0 0x7f1278073c60 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:51254/0 (socket says 192.168.123.105:51254) 2026-03-09T20:18:01.726 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.460+0000 7f12777fe640 1 -- 192.168.123.105:0/393516629 learned_addr learned my addr 192.168.123.105:0/393516629 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:01.726 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.460+0000 7f127e1cc640 1 -- 192.168.123.105:0/393516629 --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f1278074370 con 0x7f12781066d0 2026-03-09T20:18:01.726 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.460+0000 7f1274ff9640 1 -- 192.168.123.105:0/393516629 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3805657520 0 0) 0x7f1278074370 con 0x7f12781066d0 2026-03-09T20:18:01.726 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.461+0000 7f1274ff9640 1 -- 192.168.123.105:0/393516629 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f1250003620 con 0x7f12781066d0 2026-03-09T20:18:01.726 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.461+0000 7f1274ff9640 1 -- 192.168.123.105:0/393516629 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1424895643 0 0) 0x7f1250003620 con 0x7f12781066d0 2026-03-09T20:18:01.726 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.461+0000 7f1274ff9640 1 -- 192.168.123.105:0/393516629 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f1278074370 con 0x7f12781066d0 2026-03-09T20:18:01.726 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.461+0000 7f1274ff9640 1 -- 192.168.123.105:0/393516629 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f1268002890 con 0x7f12781066d0 2026-03-09T20:18:01.726 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.461+0000 7f1274ff9640 1 -- 192.168.123.105:0/393516629 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 3827572206 0 0) 0x7f1278074370 con 0x7f12781066d0 2026-03-09T20:18:01.726 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.461+0000 7f1274ff9640 1 -- 192.168.123.105:0/393516629 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f1278074540 con 0x7f12781066d0 2026-03-09T20:18:01.726 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.461+0000 7f127e1cc640 1 -- 192.168.123.105:0/393516629 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f1278074850 con 0x7f12781066d0 2026-03-09T20:18:01.726 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.461+0000 7f127e1cc640 1 -- 192.168.123.105:0/393516629 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f12781afb70 con 0x7f12781066d0 2026-03-09T20:18:01.726 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.462+0000 7f1274ff9640 1 -- 192.168.123.105:0/393516629 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f1268004b90 con 0x7f12781066d0 2026-03-09T20:18:01.726 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.462+0000 7f1274ff9640 1 -- 192.168.123.105:0/393516629 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f1268005de0 con 0x7f12781066d0 2026-03-09T20:18:01.726 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.462+0000 7f1274ff9640 1 -- 192.168.123.105:0/393516629 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 3) ==== 50095+0+0 (unknown 1883118976 0 0) 0x7f1268012400 con 0x7f12781066d0 2026-03-09T20:18:01.726 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.463+0000 7f1274ff9640 1 -- 192.168.123.105:0/393516629 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (unknown 1808929742 0 0) 0x7f126804dd80 con 0x7f12781066d0 2026-03-09T20:18:01.726 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.463+0000 7f127e1cc640 1 -- 192.168.123.105:0/393516629 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f123c005180 con 0x7f12781066d0 2026-03-09T20:18:01.726 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.466+0000 7f1274ff9640 1 -- 192.168.123.105:0/393516629 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f12680188c0 con 0x7f12781066d0 2026-03-09T20:18:01.726 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.590+0000 7f127e1cc640 1 -- 192.168.123.105:0/393516629 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "status", "format": "json-pretty"} v 0) -- 0x7f123c005470 con 0x7f12781066d0 2026-03-09T20:18:01.726 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.591+0000 7f1274ff9640 1 -- 192.168.123.105:0/393516629 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "status", "format": "json-pretty"}]=0 v0) ==== 79+0+1290 (unknown 4201413639 0 1348260465) 0x7f12680181c0 con 0x7f12781066d0 2026-03-09T20:18:01.726 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.594+0000 7f127e1cc640 1 -- 192.168.123.105:0/393516629 >> v1:192.168.123.105:6800/4277841438 conn(0x7f125003e740 legacy=0x7f1250040c00 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:01.726 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.594+0000 7f127e1cc640 1 -- 192.168.123.105:0/393516629 >> v1:192.168.123.105:6789/0 conn(0x7f12781066d0 legacy=0x7f1278073c60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:01.726 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.595+0000 7f127e1cc640 1 -- 192.168.123.105:0/393516629 shutdown_connections 2026-03-09T20:18:01.726 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.595+0000 7f127e1cc640 1 -- 192.168.123.105:0/393516629 >> 192.168.123.105:0/393516629 conn(0x7f1278101e40 msgr2=0x7f12781042a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:01.726 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.595+0000 7f127e1cc640 1 -- 192.168.123.105:0/393516629 shutdown_connections 2026-03-09T20:18:01.726 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.595+0000 7f127e1cc640 1 -- 192.168.123.105:0/393516629 wait complete. 2026-03-09T20:18:01.726 INFO:teuthology.orchestra.run.vm05.stdout:mgr is available 2026-03-09T20:18:02.110 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:01 vm05 ceph-mon[51870]: mgrmap e3: y(active, since 1.00873s) 2026-03-09T20:18:02.110 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:01 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/393516629' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout [global] 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout fsid = c0151936-1bf4-11f1-b896-23f7bea8a6ea 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mon_host = [v1:192.168.123.105:6789] 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout [mgr] 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout [osd] 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.853+0000 7f8b1b414640 1 Processor -- start 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.854+0000 7f8b1b414640 1 -- start start 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.854+0000 7f8b1b414640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f8b1410ab00 con 0x7f8b141066d0 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.854+0000 7f8b19189640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f8b141066d0 0x7f8b14106ad0 
:-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:51260/0 (socket says 192.168.123.105:51260) 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.854+0000 7f8b19189640 1 -- 192.168.123.105:0/3557078233 learned_addr learned my addr 192.168.123.105:0/3557078233 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.855+0000 7f8b03fff640 1 -- 192.168.123.105:0/3557078233 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3043909083 0 0) 0x7f8b1410ab00 con 0x7f8b141066d0 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.855+0000 7f8b03fff640 1 -- 192.168.123.105:0/3557078233 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f8af8003620 con 0x7f8b141066d0 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.855+0000 7f8b03fff640 1 -- 192.168.123.105:0/3557078233 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 1973083387 0 0) 0x7f8af8003620 con 0x7f8b141066d0 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.855+0000 7f8b03fff640 1 -- 192.168.123.105:0/3557078233 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f8b1410bce0 con 0x7f8b141066d0 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.855+0000 7f8b03fff640 1 -- 192.168.123.105:0/3557078233 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f8b08002e10 con 0x7f8b141066d0 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.855+0000 7f8b03fff640 1 -- 192.168.123.105:0/3557078233 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f8b080033e0 con 0x7f8b141066d0 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.855+0000 7f8b03fff640 1 -- 192.168.123.105:0/3557078233 <== mon.0 v1:192.168.123.105:6789/0 5 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f8b08005780 con 0x7f8b141066d0 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.856+0000 7f8b1b414640 1 -- 192.168.123.105:0/3557078233 >> v1:192.168.123.105:6789/0 conn(0x7f8b141066d0 legacy=0x7f8b14106ad0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.856+0000 7f8b1b414640 1 -- 192.168.123.105:0/3557078233 shutdown_connections 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.856+0000 7f8b1b414640 1 -- 192.168.123.105:0/3557078233 >> 192.168.123.105:0/3557078233 conn(0x7f8b14101e40 msgr2=0x7f8b141042a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.856+0000 7f8b1b414640 1 -- 192.168.123.105:0/3557078233 shutdown_connections 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 
2026-03-09T20:18:01.856+0000 7f8b1b414640 1 -- 192.168.123.105:0/3557078233 wait complete. 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.857+0000 7f8b1b414640 1 Processor -- start 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.857+0000 7f8b1b414640 1 -- start start 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.858+0000 7f8b1b414640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f8b141a2360 con 0x7f8b141066d0 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.858+0000 7f8b19189640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f8b141066d0 0x7f8b1407c480 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:51262/0 (socket says 192.168.123.105:51262) 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.858+0000 7f8b19189640 1 -- 192.168.123.105:0/540394090 learned_addr learned my addr 192.168.123.105:0/540394090 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.859+0000 7f8b027fc640 1 -- 192.168.123.105:0/540394090 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3013454072 0 0) 0x7f8b141a2360 con 0x7f8b141066d0 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.859+0000 7f8b027fc640 1 -- 192.168.123.105:0/540394090 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f8ae8003620 con 0x7f8b141066d0 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.862+0000 7f8b027fc640 1 -- 192.168.123.105:0/540394090 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1201603336 0 0) 0x7f8ae8003620 con 0x7f8b141066d0 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.862+0000 7f8b027fc640 1 -- 192.168.123.105:0/540394090 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f8b141a2360 con 0x7f8b141066d0 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.862+0000 7f8b027fc640 1 -- 192.168.123.105:0/540394090 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f8b08002890 con 0x7f8b141066d0 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.866+0000 7f8b027fc640 1 -- 192.168.123.105:0/540394090 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2163019493 0 0) 0x7f8b141a2360 con 0x7f8b141066d0 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.866+0000 7f8b027fc640 1 -- 192.168.123.105:0/540394090 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f8b141a3540 con 0x7f8b141066d0 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.866+0000 7f8b1b414640 1 -- 192.168.123.105:0/540394090 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f8b141a2530 
con 0x7f8b141066d0 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.866+0000 7f8b1b414640 1 -- 192.168.123.105:0/540394090 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f8b141a2a70 con 0x7f8b141066d0 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.866+0000 7f8b027fc640 1 -- 192.168.123.105:0/540394090 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f8b08004bd0 con 0x7f8b141066d0 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.866+0000 7f8b027fc640 1 -- 192.168.123.105:0/540394090 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f8b08005e80 con 0x7f8b141066d0 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.867+0000 7f8b027fc640 1 -- 192.168.123.105:0/540394090 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 4) ==== 50201+0+0 (unknown 308183859 0 0) 0x7f8b08012450 con 0x7f8b141066d0 2026-03-09T20:18:02.116 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.867+0000 7f8b027fc640 1 -- 192.168.123.105:0/540394090 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (unknown 1808929742 0 0) 0x7f8b0804e0f0 con 0x7f8b141066d0 2026-03-09T20:18:02.117 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.868+0000 7f8b1b414640 1 -- 192.168.123.105:0/540394090 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f8b1410b8a0 con 0x7f8b141066d0 2026-03-09T20:18:02.117 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.871+0000 7f8b027fc640 1 -- 192.168.123.105:0/540394090 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f8b08018870 con 0x7f8b141066d0 2026-03-09T20:18:02.117 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.961+0000 7f8b1b414640 1 -- 192.168.123.105:0/540394090 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "config assimilate-conf"} v 0) -- 0x7f8b141a2d60 con 0x7f8b141066d0 2026-03-09T20:18:02.117 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.963+0000 7f8b027fc640 1 -- 192.168.123.105:0/540394090 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "config assimilate-conf"}]=0 v3) ==== 70+0+356 (unknown 1187553405 0 2201627273) 0x7f8b08018170 con 0x7f8b141066d0 2026-03-09T20:18:02.117 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.965+0000 7f8b1b414640 1 -- 192.168.123.105:0/540394090 >> v1:192.168.123.105:6800/4277841438 conn(0x7f8ae803eaf0 legacy=0x7f8ae8040fb0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:02.117 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.965+0000 7f8b1b414640 1 -- 192.168.123.105:0/540394090 >> v1:192.168.123.105:6789/0 conn(0x7f8b141066d0 legacy=0x7f8b1407c480 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:02.117 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.966+0000 7f8b1b414640 1 -- 192.168.123.105:0/540394090 shutdown_connections 2026-03-09T20:18:02.117 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.966+0000 7f8b1b414640 1 -- 192.168.123.105:0/540394090 >> 192.168.123.105:0/540394090 conn(0x7f8b14101e40 msgr2=0x7f8b141042a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:02.117 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.966+0000 7f8b1b414640 1 -- 192.168.123.105:0/540394090 shutdown_connections 2026-03-09T20:18:02.117 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:01.966+0000 7f8b1b414640 1 -- 192.168.123.105:0/540394090 wait complete. 2026-03-09T20:18:02.117 INFO:teuthology.orchestra.run.vm05.stdout:Enabling cephadm module... 2026-03-09T20:18:03.187 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:02 vm05 ceph-mon[51870]: mgrmap e4: y(active, since 2s) 2026-03-09T20:18:03.187 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:02 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/540394090' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-09T20:18:03.187 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:02 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/243839898' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-09T20:18:03.187 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:03 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ignoring --setuser ceph since I am not root 2026-03-09T20:18:03.187 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:03 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ignoring --setgroup ceph since I am not root 2026-03-09T20:18:03.187 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:03 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:03.187+0000 7ff789375140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T20:18:03.212 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:02.247+0000 7f151381d640 1 Processor -- start 2026-03-09T20:18:03.212 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:02.248+0000 7f151381d640 1 -- start start 2026-03-09T20:18:03.212 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:02.248+0000 7f151381d640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f150c10cd80 con 0x7f150c108950 2026-03-09T20:18:03.212 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:02.248+0000 7f1511592640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f150c108950 0x7f150c108d50 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:51276/0 (socket says 192.168.123.105:51276) 2026-03-09T20:18:03.213 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:02.248+0000 7f1511592640 1 -- 192.168.123.105:0/3696030662 learned_addr learned my addr 192.168.123.105:0/3696030662 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:03.213 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:02.248+0000 7f14fbfff640 1 -- 192.168.123.105:0/3696030662 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1850796456 0 0) 0x7f150c10cd80 con 0x7f150c108950 2026-03-09T20:18:03.213 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:02.248+0000 7f14fbfff640 1 -- 
192.168.123.105:0/3696030662 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f14f0003620 con 0x7f150c108950 2026-03-09T20:18:03.213 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:02.248+0000 7f14fbfff640 1 -- 192.168.123.105:0/3696030662 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 1061915956 0 0) 0x7f14f0003620 con 0x7f150c108950 2026-03-09T20:18:03.213 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:02.248+0000 7f14fbfff640 1 -- 192.168.123.105:0/3696030662 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f150c10df60 con 0x7f150c108950 2026-03-09T20:18:03.213 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:02.249+0000 7f14fbfff640 1 -- 192.168.123.105:0/3696030662 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f1500002e10 con 0x7f150c108950 2026-03-09T20:18:03.213 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:02.249+0000 7f14fbfff640 1 -- 192.168.123.105:0/3696030662 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f15000034a0 con 0x7f150c108950 2026-03-09T20:18:03.213 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:02.249+0000 7f151381d640 1 -- 192.168.123.105:0/3696030662 >> v1:192.168.123.105:6789/0 conn(0x7f150c108950 legacy=0x7f150c108d50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:03.213 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:02.249+0000 7f151381d640 1 -- 192.168.123.105:0/3696030662 shutdown_connections 2026-03-09T20:18:03.213 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:02.249+0000 7f151381d640 1 -- 192.168.123.105:0/3696030662 >> 192.168.123.105:0/3696030662 conn(0x7f150c07bdf0 msgr2=0x7f150c07c240 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:03.213 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:02.249+0000 7f151381d640 1 -- 192.168.123.105:0/3696030662 shutdown_connections 2026-03-09T20:18:03.213 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:02.249+0000 7f151381d640 1 -- 192.168.123.105:0/3696030662 wait complete. 
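The two mon_commands dispatched above, `config assimilate-conf` and `mgr module enable cephadm`, are the next bootstrap steps: the minimal ceph.conf dumped earlier in the log (fsid, mon_host, the mon/mgr/osd overrides) is pushed into the monitors' centralized config store, and then the cephadm orchestrator module is switched on, producing the "Enabling cephadm module..." line. A hedged sketch of the equivalent CLI steps follows (assumption: not the actual teuthology task code; the conf excerpt is a partial, illustrative copy of the values shown in the log):

```python
# Hedged sketch of the assimilate-conf / module-enable steps seen in the log above.
# Assumption: illustrative only; MINIMAL_CONF is an excerpt of the dumped conf, not complete.
import subprocess
import tempfile

MINIMAL_CONF = """\
[global]
osd_crush_chooseleaf_type = 0
mon_warn_on_no_sortbitwise = false

[mgr]
mgr/telemetry/nag = false
"""

with tempfile.NamedTemporaryFile("w", suffix=".conf", delete=False) as f:
    f.write(MINIMAL_CONF)
    conf_path = f.name

# Push the conf into the mon's centralized config database, then enable the
# cephadm orchestrator backend (the two commands visible in the mon log).
subprocess.run(["ceph", "config", "assimilate-conf", "-i", conf_path], check=True)
subprocess.run(["ceph", "mgr", "module", "enable", "cephadm"], check=True)
```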
2026-03-09T20:18:03.213 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:02.250+0000 7f151381d640 1 Processor -- start 2026-03-09T20:18:03.213 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:02.250+0000 7f151381d640 1 -- start start 2026-03-09T20:18:03.213 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:02.250+0000 7f151381d640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f150c19ebc0 con 0x7f150c108950 2026-03-09T20:18:03.213 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:02.250+0000 7f1511592640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f150c108950 0x7f150c19e4b0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:51288/0 (socket says 192.168.123.105:51288) 2026-03-09T20:18:03.213 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:02.250+0000 7f1511592640 1 -- 192.168.123.105:0/243839898 learned_addr learned my addr 192.168.123.105:0/243839898 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:03.213 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:02.250+0000 7f14fa7fc640 1 -- 192.168.123.105:0/243839898 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 571839866 0 0) 0x7f150c19ebc0 con 0x7f150c108950 2026-03-09T20:18:03.213 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:02.250+0000 7f14fa7fc640 1 -- 192.168.123.105:0/243839898 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f14e0003620 con 0x7f150c108950 2026-03-09T20:18:03.213 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:02.250+0000 7f14fa7fc640 1 -- 192.168.123.105:0/243839898 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2758452372 0 0) 0x7f14e0003620 con 0x7f150c108950 2026-03-09T20:18:03.213 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:02.250+0000 7f14fa7fc640 1 -- 192.168.123.105:0/243839898 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f150c19ebc0 con 0x7f150c108950 2026-03-09T20:18:03.213 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:02.250+0000 7f14fa7fc640 1 -- 192.168.123.105:0/243839898 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f15000031f0 con 0x7f150c108950 2026-03-09T20:18:03.213 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:02.250+0000 7f14fa7fc640 1 -- 192.168.123.105:0/243839898 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 974909240 0 0) 0x7f150c19ebc0 con 0x7f150c108950 2026-03-09T20:18:03.213 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:02.251+0000 7f14fa7fc640 1 -- 192.168.123.105:0/243839898 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f150c19ed90 con 0x7f150c108950 2026-03-09T20:18:03.213 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:02.251+0000 7f151381d640 1 -- 192.168.123.105:0/243839898 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f150c19f0a0 con 0x7f150c108950 2026-03-09T20:18:03.213 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:02.251+0000 7f151381d640 1 -- 192.168.123.105:0/243839898 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f150c1a2bb0 con 0x7f150c108950 2026-03-09T20:18:03.213 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:02.251+0000 7f14fa7fc640 1 -- 192.168.123.105:0/243839898 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f15000028b0 con 0x7f150c108950 2026-03-09T20:18:03.213 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:02.251+0000 7f14fa7fc640 1 -- 192.168.123.105:0/243839898 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f1500006400 con 0x7f150c108950 2026-03-09T20:18:03.213 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:02.251+0000 7f14fa7fc640 1 -- 192.168.123.105:0/243839898 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 4) ==== 50201+0+0 (unknown 308183859 0 0) 0x7f15000076d0 con 0x7f150c108950 2026-03-09T20:18:03.213 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:02.252+0000 7f14fa7fc640 1 -- 192.168.123.105:0/243839898 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (unknown 1808929742 0 0) 0x7f150004e0a0 con 0x7f150c108950 2026-03-09T20:18:03.213 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:02.253+0000 7f151381d640 1 -- 192.168.123.105:0/243839898 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f150c10d9d0 con 0x7f150c108950 2026-03-09T20:18:03.213 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:02.255+0000 7f14fa7fc640 1 -- 192.168.123.105:0/243839898 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f1500018920 con 0x7f150c108950 2026-03-09T20:18:03.213 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:02.370+0000 7f151381d640 1 -- 192.168.123.105:0/243839898 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) -- 0x7f150c1a3120 con 0x7f150c108950 2026-03-09T20:18:03.213 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.072+0000 7f14fa7fc640 1 -- 192.168.123.105:0/243839898 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "mgr module enable", "module": "cephadm"}]=0 v5) ==== 86+0+0 (unknown 2263024820 0 0) 0x7f1500018220 con 0x7f150c108950 2026-03-09T20:18:03.213 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.075+0000 7f151381d640 1 -- 192.168.123.105:0/243839898 >> v1:192.168.123.105:6800/4277841438 conn(0x7f14e003ecd0 legacy=0x7f14e0041190 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:03.213 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.075+0000 7f151381d640 1 -- 192.168.123.105:0/243839898 >> v1:192.168.123.105:6789/0 conn(0x7f150c108950 legacy=0x7f150c19e4b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:03.213 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.078+0000 7f151381d640 1 -- 192.168.123.105:0/243839898 shutdown_connections 2026-03-09T20:18:03.213 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.078+0000 7f151381d640 1 -- 192.168.123.105:0/243839898 >> 192.168.123.105:0/243839898 conn(0x7f150c07bdf0 msgr2=0x7f150c105790 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:03.213 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.078+0000 7f151381d640 1 -- 192.168.123.105:0/243839898 shutdown_connections 2026-03-09T20:18:03.213 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.078+0000 7f151381d640 1 -- 192.168.123.105:0/243839898 wait complete. 2026-03-09T20:18:03.487 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:03 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:03.243+0000 7ff789375140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T20:18:03.636 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout { 2026-03-09T20:18:03.637 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 5, 2026-03-09T20:18:03.637 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-09T20:18:03.637 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "active_name": "y", 2026-03-09T20:18:03.637 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-09T20:18:03.637 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout } 2026-03-09T20:18:03.637 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.360+0000 7f9737fff640 1 Processor -- start 2026-03-09T20:18:03.637 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.360+0000 7f9737fff640 1 -- start start 2026-03-09T20:18:03.637 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.360+0000 7f9737fff640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f9738111530 con 0x7f9738074160 2026-03-09T20:18:03.637 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.361+0000 7f9736ffd640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f9738074160 0x7f9738074560 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:51328/0 (socket says 192.168.123.105:51328) 2026-03-09T20:18:03.637 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.361+0000 7f9736ffd640 1 -- 192.168.123.105:0/4281946785 learned_addr learned my addr 192.168.123.105:0/4281946785 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:03.637 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.361+0000 7f9735ffb640 1 -- 192.168.123.105:0/4281946785 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 725919849 0 0) 0x7f9738111530 con 0x7f9738074160 2026-03-09T20:18:03.637 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.362+0000 7f9735ffb640 1 -- 192.168.123.105:0/4281946785 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f971c003620 con 0x7f9738074160 2026-03-09T20:18:03.637 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.362+0000 7f9735ffb640 1 -- 192.168.123.105:0/4281946785 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 1282641588 0 0) 0x7f971c003620 con 0x7f9738074160 
2026-03-09T20:18:03.637 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.362+0000 7f9735ffb640 1 -- 192.168.123.105:0/4281946785 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f9738112710 con 0x7f9738074160 2026-03-09T20:18:03.637 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.362+0000 7f9735ffb640 1 -- 192.168.123.105:0/4281946785 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f9728002e10 con 0x7f9738074160 2026-03-09T20:18:03.637 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.362+0000 7f9735ffb640 1 -- 192.168.123.105:0/4281946785 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f97280033e0 con 0x7f9738074160 2026-03-09T20:18:03.637 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.363+0000 7f9737fff640 1 -- 192.168.123.105:0/4281946785 >> v1:192.168.123.105:6789/0 conn(0x7f9738074160 legacy=0x7f9738074560 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:03.637 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.363+0000 7f9737fff640 1 -- 192.168.123.105:0/4281946785 shutdown_connections 2026-03-09T20:18:03.637 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.363+0000 7f9737fff640 1 -- 192.168.123.105:0/4281946785 >> 192.168.123.105:0/4281946785 conn(0x7f973806f4e0 msgr2=0x7f9738071920 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:03.637 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.363+0000 7f9737fff640 1 -- 192.168.123.105:0/4281946785 shutdown_connections 2026-03-09T20:18:03.637 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.363+0000 7f9737fff640 1 -- 192.168.123.105:0/4281946785 wait complete. 
2026-03-09T20:18:03.637 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.363+0000 7f9737fff640 1 Processor -- start 2026-03-09T20:18:03.637 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.363+0000 7f9737fff640 1 -- start start 2026-03-09T20:18:03.637 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.364+0000 7f9737fff640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f9738115fb0 con 0x7f9738115b90 2026-03-09T20:18:03.637 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.364+0000 7f9736ffd640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f9738115b90 0x7f9738114300 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:51330/0 (socket says 192.168.123.105:51330) 2026-03-09T20:18:03.637 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.364+0000 7f9736ffd640 1 -- 192.168.123.105:0/1745205018 learned_addr learned my addr 192.168.123.105:0/1745205018 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:03.637 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.364+0000 7f9717fff640 1 -- 192.168.123.105:0/1745205018 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 732267826 0 0) 0x7f9738115fb0 con 0x7f9738115b90 2026-03-09T20:18:03.637 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.364+0000 7f9717fff640 1 -- 192.168.123.105:0/1745205018 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f970c003620 con 0x7f9738115b90 2026-03-09T20:18:03.637 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.365+0000 7f9717fff640 1 -- 192.168.123.105:0/1745205018 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3153329319 0 0) 0x7f970c003620 con 0x7f9738115b90 2026-03-09T20:18:03.637 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.365+0000 7f9717fff640 1 -- 192.168.123.105:0/1745205018 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f9738115fb0 con 0x7f9738115b90 2026-03-09T20:18:03.637 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.365+0000 7f9717fff640 1 -- 192.168.123.105:0/1745205018 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f9728003170 con 0x7f9738115b90 2026-03-09T20:18:03.637 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.365+0000 7f9717fff640 1 -- 192.168.123.105:0/1745205018 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2007770209 0 0) 0x7f9738115fb0 con 0x7f9738115b90 2026-03-09T20:18:03.637 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.365+0000 7f9717fff640 1 -- 192.168.123.105:0/1745205018 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f9738116180 con 0x7f9738115b90 2026-03-09T20:18:03.637 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.365+0000 7f9737fff640 1 -- 192.168.123.105:0/1745205018 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f9738114a10 con 0x7f9738115b90 2026-03-09T20:18:03.637 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.365+0000 7f9737fff640 1 -- 192.168.123.105:0/1745205018 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f9738114f50 con 0x7f9738115b90 2026-03-09T20:18:03.637 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.366+0000 7f9717fff640 1 -- 192.168.123.105:0/1745205018 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f9728003410 con 0x7f9738115b90 2026-03-09T20:18:03.637 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.366+0000 7f9717fff640 1 -- 192.168.123.105:0/1745205018 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f9728005cd0 con 0x7f9738115b90 2026-03-09T20:18:03.637 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.368+0000 7f9737fff640 1 -- 192.168.123.105:0/1745205018 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f9704005180 con 0x7f9738115b90 2026-03-09T20:18:03.637 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.368+0000 7f9717fff640 1 -- 192.168.123.105:0/1745205018 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 5) ==== 50212+0+0 (unknown 1406563671 0 0) 0x7f9728004b90 con 0x7f9738115b90 2026-03-09T20:18:03.637 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.368+0000 7f97367fc640 1 -- 192.168.123.105:0/1745205018 >> v1:192.168.123.105:6800/4277841438 conn(0x7f970c03ec60 legacy=0x7f970c041120 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v1:192.168.123.105:6800/4277841438 2026-03-09T20:18:03.637 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.368+0000 7f9717fff640 1 -- 192.168.123.105:0/1745205018 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (unknown 1808929742 0 0) 0x7f972804d160 con 0x7f9738115b90 2026-03-09T20:18:03.637 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.371+0000 7f9717fff640 1 -- 192.168.123.105:0/1745205018 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f97280178e0 con 0x7f9738115b90 2026-03-09T20:18:03.637 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.487+0000 7f9737fff640 1 -- 192.168.123.105:0/1745205018 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "mgr stat"} v 0) -- 0x7f9704005d40 con 0x7f9738115b90 2026-03-09T20:18:03.637 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.489+0000 7f9717fff640 1 -- 192.168.123.105:0/1745205018 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "mgr stat"}]=0 v5) ==== 56+0+88 (unknown 3768197548 0 15966916) 0x7f97280171e0 con 0x7f9738115b90 2026-03-09T20:18:03.637 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.493+0000 7f9715ffb640 1 -- 192.168.123.105:0/1745205018 >> v1:192.168.123.105:6800/4277841438 conn(0x7f970c03ec60 legacy=0x7f970c041120 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T20:18:03.637 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.493+0000 7f9715ffb640 1 -- 192.168.123.105:0/1745205018 >> v1:192.168.123.105:6789/0 conn(0x7f9738115b90 legacy=0x7f9738114300 
unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:18:03.637 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.493+0000 7f9715ffb640 1 -- 192.168.123.105:0/1745205018 shutdown_connections
2026-03-09T20:18:03.638 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.493+0000 7f9715ffb640 1 -- 192.168.123.105:0/1745205018 >> 192.168.123.105:0/1745205018 conn(0x7f973806f4e0 msgr2=0x7f9738070130 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T20:18:03.638 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.493+0000 7f9715ffb640 1 -- 192.168.123.105:0/1745205018 shutdown_connections
2026-03-09T20:18:03.638 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.493+0000 7f9715ffb640 1 -- 192.168.123.105:0/1745205018 wait complete.
2026-03-09T20:18:03.638 INFO:teuthology.orchestra.run.vm05.stdout:Waiting for the mgr to restart...
2026-03-09T20:18:03.638 INFO:teuthology.orchestra.run.vm05.stdout:Waiting for mgr epoch 5...
2026-03-09T20:18:03.740 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:03 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:03.686+0000 7ff789375140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-09T20:18:04.395 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:04 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/243839898' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
2026-03-09T20:18:04.395 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:04 vm05 ceph-mon[51870]: mgrmap e5: y(active, since 3s)
2026-03-09T20:18:04.395 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:04 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1745205018' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
2026-03-09T20:18:04.395 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:04 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:04.022+0000 7ff789375140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-09T20:18:04.395 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:04 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-09T20:18:04.395 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:04 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-09T20:18:04.395 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:04 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: from numpy import show_config as show_numpy_config
2026-03-09T20:18:04.395 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:04 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:04.106+0000 7ff789375140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-09T20:18:04.395 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:04 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:04.143+0000 7ff789375140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-09T20:18:04.395 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:04 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:04.210+0000 7ff789375140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-09T20:18:04.996 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:04 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:04.737+0000 7ff789375140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-09T20:18:04.996 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:04 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:04.847+0000 7ff789375140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-09T20:18:04.996 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:04 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:04.886+0000 7ff789375140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-09T20:18:04.996 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:04 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:04.919+0000 7ff789375140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-09T20:18:04.996 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:04 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:04.958+0000 7ff789375140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-09T20:18:05.409 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:04 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:04.995+0000 7ff789375140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-09T20:18:05.410 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:05 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:05.174+0000 7ff789375140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-09T20:18:05.410 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:05 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:05.226+0000 7ff789375140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-09T20:18:05.725 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:05 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:05.444+0000 7ff789375140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-09T20:18:06.005 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:05 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:05.726+0000 7ff789375140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-09T20:18:06.005 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:05 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:05.766+0000 7ff789375140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-09T20:18:06.005 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:05 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:05.810+0000 7ff789375140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-09T20:18:06.005 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:05 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:05.888+0000 7ff789375140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-09T20:18:06.005 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:05 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:05.926+0000 7ff789375140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-09T20:18:06.272 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:06 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:06.005+0000 7ff789375140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-09T20:18:06.272 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:06 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:06.119+0000 7ff789375140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-09T20:18:06.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:06 vm05 ceph-mon[51870]: Active manager daemon y restarted
2026-03-09T20:18:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:06 vm05 ceph-mon[51870]: Activating manager daemon y
2026-03-09T20:18:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:06 vm05 ceph-mon[51870]: osdmap e2: 0 total, 0 up, 0 in
2026-03-09T20:18:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:06 vm05 ceph-mon[51870]: mgrmap e6: y(active, starting, since 0.0406857s)
2026-03-09T20:18:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:06 vm05 ceph-mon[51870]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T20:18:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:06 vm05 ceph-mon[51870]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-09T20:18:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:06 vm05 ceph-mon[51870]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-09T20:18:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:06 vm05 ceph-mon[51870]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-09T20:18:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:06 vm05 ceph-mon[51870]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-09T20:18:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:06 vm05 ceph-mon[51870]: Manager daemon y is now available
2026-03-09T20:18:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:06 vm05 ceph-mon[51870]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y'
2026-03-09T20:18:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:06 vm05 ceph-mon[51870]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y'
2026-03-09T20:18:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:06 vm05 ceph-mon[51870]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T20:18:06.660 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:06 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:06.272+0000 7ff789375140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-09T20:18:06.660 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:06 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:06.312+0000 7ff789375140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-09T20:18:07.770 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:07 vm05 ceph-mon[51870]: Found migration_current of "None". Setting to last migration.
2026-03-09T20:18:07.770 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:07 vm05 ceph-mon[51870]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-09T20:18:07.770 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:07 vm05 ceph-mon[51870]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T20:18:07.770 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:07 vm05 ceph-mon[51870]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-09T20:18:07.782 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout {
2026-03-09T20:18:07.782 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 7,
2026-03-09T20:18:07.782 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "initialized": true
2026-03-09T20:18:07.782 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }
2026-03-09T20:18:07.782 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.784+0000 7fc15cec8640 1 Processor -- start
2026-03-09T20:18:07.782 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.784+0000 7fc15cec8640 1 -- start start
2026-03-09T20:18:07.782 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.784+0000 7fc15cec8640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fc158111530 con 0x7fc158074160
2026-03-09T20:18:07.782 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.784+0000 7fc1577fe640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7fc158074160 0x7fc158074560 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:51346/0 (socket says 192.168.123.105:51346)
2026-03-09T20:18:07.782 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.784+0000 7fc1577fe640 1 -- 192.168.123.105:0/291544617 learned_addr learned my addr 192.168.123.105:0/291544617 (peer_addr_for_me v1:192.168.123.105:0/0)
2026-03-09T20:18:07.782 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.785+0000 7fc1567fc640 1 -- 192.168.123.105:0/291544617 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1068458683 0 0) 0x7fc158111530 con 0x7fc158074160
2026-03-09T20:18:07.782 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.785+0000 7fc1567fc640 1 -- 192.168.123.105:0/291544617 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fc13c003620
con 0x7fc158074160 2026-03-09T20:18:07.782 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.786+0000 7fc1567fc640 1 -- 192.168.123.105:0/291544617 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 2866654807 0 0) 0x7fc13c003620 con 0x7fc158074160 2026-03-09T20:18:07.782 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.786+0000 7fc1567fc640 1 -- 192.168.123.105:0/291544617 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fc158112710 con 0x7fc158074160 2026-03-09T20:18:07.782 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.786+0000 7fc1567fc640 1 -- 192.168.123.105:0/291544617 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7fc148002e10 con 0x7fc158074160 2026-03-09T20:18:07.783 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.786+0000 7fc1567fc640 1 -- 192.168.123.105:0/291544617 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7fc1480033e0 con 0x7fc158074160 2026-03-09T20:18:07.783 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.787+0000 7fc15cec8640 1 -- 192.168.123.105:0/291544617 >> v1:192.168.123.105:6789/0 conn(0x7fc158074160 legacy=0x7fc158074560 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:07.783 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.787+0000 7fc15cec8640 1 -- 192.168.123.105:0/291544617 shutdown_connections 2026-03-09T20:18:07.783 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.787+0000 7fc15cec8640 1 -- 192.168.123.105:0/291544617 >> 192.168.123.105:0/291544617 conn(0x7fc15806f4e0 msgr2=0x7fc158071920 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:07.783 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.787+0000 7fc15cec8640 1 -- 192.168.123.105:0/291544617 shutdown_connections 2026-03-09T20:18:07.783 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.787+0000 7fc15cec8640 1 -- 192.168.123.105:0/291544617 wait complete. 
2026-03-09T20:18:07.783 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.788+0000 7fc15cec8640 1 Processor -- start 2026-03-09T20:18:07.783 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.788+0000 7fc15cec8640 1 -- start start 2026-03-09T20:18:07.783 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.788+0000 7fc15cec8640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fc1581137a0 con 0x7fc158113380 2026-03-09T20:18:07.783 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.788+0000 7fc1577fe640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7fc158113380 0x7fc1581a5240 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:51354/0 (socket says 192.168.123.105:51354) 2026-03-09T20:18:07.783 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.788+0000 7fc1577fe640 1 -- 192.168.123.105:0/1517109885 learned_addr learned my addr 192.168.123.105:0/1517109885 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:07.783 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.790+0000 7fc154ff9640 1 -- 192.168.123.105:0/1517109885 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 90548984 0 0) 0x7fc1581137a0 con 0x7fc158113380 2026-03-09T20:18:07.783 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.790+0000 7fc154ff9640 1 -- 192.168.123.105:0/1517109885 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fc144003650 con 0x7fc158113380 2026-03-09T20:18:07.783 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.790+0000 7fc154ff9640 1 -- 192.168.123.105:0/1517109885 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3653865126 0 0) 0x7fc144003650 con 0x7fc158113380 2026-03-09T20:18:07.783 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.790+0000 7fc154ff9640 1 -- 192.168.123.105:0/1517109885 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fc1581137a0 con 0x7fc158113380 2026-03-09T20:18:07.783 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.791+0000 7fc154ff9640 1 -- 192.168.123.105:0/1517109885 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7fc148003170 con 0x7fc158113380 2026-03-09T20:18:07.783 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.791+0000 7fc154ff9640 1 -- 192.168.123.105:0/1517109885 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 3240521634 0 0) 0x7fc1581137a0 con 0x7fc158113380 2026-03-09T20:18:07.783 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.791+0000 7fc154ff9640 1 -- 192.168.123.105:0/1517109885 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fc1581a7970 con 0x7fc158113380 2026-03-09T20:18:07.783 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.791+0000 7fc15cec8640 1 -- 192.168.123.105:0/1517109885 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7fc1581a6960 con 0x7fc158113380 2026-03-09T20:18:07.783 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.791+0000 7fc15cec8640 1 -- 192.168.123.105:0/1517109885 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7fc1581a6e40 con 0x7fc158113380 2026-03-09T20:18:07.783 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.792+0000 7fc154ff9640 1 -- 192.168.123.105:0/1517109885 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7fc1480034b0 con 0x7fc158113380 2026-03-09T20:18:07.783 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.793+0000 7fc154ff9640 1 -- 192.168.123.105:0/1517109885 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7fc148005c50 con 0x7fc158113380 2026-03-09T20:18:07.783 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.793+0000 7fc154ff9640 1 -- 192.168.123.105:0/1517109885 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 5) ==== 50212+0+0 (unknown 1406563671 0 0) 0x7fc148012770 con 0x7fc158113380 2026-03-09T20:18:07.783 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.793+0000 7fc156ffd640 1 -- 192.168.123.105:0/1517109885 >> v1:192.168.123.105:6800/4277841438 conn(0x7fc14403ed00 legacy=0x7fc1440411c0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v1:192.168.123.105:6800/4277841438 2026-03-09T20:18:07.783 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.793+0000 7fc154ff9640 1 -- 192.168.123.105:0/1517109885 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (unknown 1808929742 0 0) 0x7fc14804df50 con 0x7fc158113380 2026-03-09T20:18:07.783 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.793+0000 7fc15cec8640 1 -- 192.168.123.105:0/1517109885 --> v1:192.168.123.105:6800/4277841438 -- command(tid 0: {"prefix": "get_command_descriptions"}) -- 0x7fc124000d10 con 0x7fc14403ed00 2026-03-09T20:18:07.783 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:03.994+0000 7fc156ffd640 1 -- 192.168.123.105:0/1517109885 >> v1:192.168.123.105:6800/4277841438 conn(0x7fc14403ed00 legacy=0x7fc1440411c0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v1:192.168.123.105:6800/4277841438 2026-03-09T20:18:07.783 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:04.395+0000 7fc156ffd640 1 -- 192.168.123.105:0/1517109885 >> v1:192.168.123.105:6800/4277841438 conn(0x7fc14403ed00 legacy=0x7fc1440411c0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v1:192.168.123.105:6800/4277841438 2026-03-09T20:18:07.783 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:05.196+0000 7fc156ffd640 1 -- 192.168.123.105:0/1517109885 >> v1:192.168.123.105:6800/4277841438 conn(0x7fc14403ed00 legacy=0x7fc1440411c0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v1:192.168.123.105:6800/4277841438 2026-03-09T20:18:07.783 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:06.352+0000 7fc154ff9640 1 -- 192.168.123.105:0/1517109885 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mgrmap(e 6) ==== 50014+0+0 (unknown 570420025 0 0) 0x7fc14804c9c0 con 0x7fc158113380 2026-03-09T20:18:07.783 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:06.352+0000 7fc154ff9640 1 -- 
192.168.123.105:0/1517109885 >> v1:192.168.123.105:6800/4277841438 conn(0x7fc14403ed00 legacy=0x7fc1440411c0 unknown :-1 s=STATE_CONNECTING l=1).mark_down
2026-03-09T20:18:07.783 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:07.575+0000 7fc154ff9640 1 -- 192.168.123.105:0/1517109885 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mgrmap(e 7) ==== 50106+0+0 (unknown 658845113 0 0) 0x7fc14804d440 con 0x7fc158113380
2026-03-09T20:18:07.783 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:07.575+0000 7fc154ff9640 1 -- 192.168.123.105:0/1517109885 --> v1:192.168.123.105:6800/1901557444 -- command(tid 0: {"prefix": "get_command_descriptions"}) -- 0x7fc1480144b0 con 0x7fc144043450
2026-03-09T20:18:07.784 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:07.578+0000 7fc154ff9640 1 -- 192.168.123.105:0/1517109885 <== mgr.14118 v1:192.168.123.105:6800/1901557444 1 ==== command_reply(tid 0: 0 ) ==== 8+0+8901 (unknown 0 0 3832181493) 0x7fc1480144b0 con 0x7fc144043450
2026-03-09T20:18:07.784 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:07.582+0000 7fc15cec8640 1 -- 192.168.123.105:0/1517109885 --> v1:192.168.123.105:6800/1901557444 -- command(tid 1: {"prefix": "mgr_status"}) -- 0x7fc124002880 con 0x7fc144043450
2026-03-09T20:18:07.784 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:07.582+0000 7fc154ff9640 1 -- 192.168.123.105:0/1517109885 <== mgr.14118 v1:192.168.123.105:6800/1901557444 2 ==== command_reply(tid 1: 0 ) ==== 8+0+51 (unknown 0 0 96372106) 0x7fc124002880 con 0x7fc144043450
2026-03-09T20:18:07.784 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:07.582+0000 7fc15cec8640 1 -- 192.168.123.105:0/1517109885 >> v1:192.168.123.105:6800/1901557444 conn(0x7fc144043450 legacy=0x7fc144045840 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:18:07.784 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:07.582+0000 7fc15cec8640 1 -- 192.168.123.105:0/1517109885 >> v1:192.168.123.105:6789/0 conn(0x7fc158113380 legacy=0x7fc1581a5240 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:18:07.784 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:07.582+0000 7fc15cec8640 1 -- 192.168.123.105:0/1517109885 shutdown_connections
2026-03-09T20:18:07.784 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:07.582+0000 7fc15cec8640 1 -- 192.168.123.105:0/1517109885 >> 192.168.123.105:0/1517109885 conn(0x7fc15806f4e0 msgr2=0x7fc158110d70 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T20:18:07.784 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:07.582+0000 7fc15cec8640 1 -- 192.168.123.105:0/1517109885 shutdown_connections
2026-03-09T20:18:07.784 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:07.582+0000 7fc15cec8640 1 -- 192.168.123.105:0/1517109885 wait complete.
2026-03-09T20:18:07.784 INFO:teuthology.orchestra.run.vm05.stdout:mgr epoch 5 is available
2026-03-09T20:18:07.784 INFO:teuthology.orchestra.run.vm05.stdout:Setting orchestrator backend to cephadm...
2026-03-09T20:18:08.407 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:07.944+0000 7fdead81c640 1 Processor -- start 2026-03-09T20:18:08.407 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:07.944+0000 7fdead81c640 1 -- start start 2026-03-09T20:18:08.407 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:07.944+0000 7fdead81c640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fdea8111530 con 0x7fdea8074160 2026-03-09T20:18:08.407 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:07.945+0000 7fdeac81a640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7fdea8074160 0x7fdea8074560 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:51416/0 (socket says 192.168.123.105:51416) 2026-03-09T20:18:08.407 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:07.945+0000 7fdeac81a640 1 -- 192.168.123.105:0/2535453649 learned_addr learned my addr 192.168.123.105:0/2535453649 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:08.407 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:07.945+0000 7fdea77fe640 1 -- 192.168.123.105:0/2535453649 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1112530535 0 0) 0x7fdea8111530 con 0x7fdea8074160 2026-03-09T20:18:08.407 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:07.945+0000 7fdea77fe640 1 -- 192.168.123.105:0/2535453649 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fde94003620 con 0x7fdea8074160 2026-03-09T20:18:08.407 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:07.945+0000 7fdea77fe640 1 -- 192.168.123.105:0/2535453649 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 3139323864 0 0) 0x7fde94003620 con 0x7fdea8074160 2026-03-09T20:18:08.407 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:07.945+0000 7fdea77fe640 1 -- 192.168.123.105:0/2535453649 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fdea8112710 con 0x7fdea8074160 2026-03-09T20:18:08.407 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:07.945+0000 7fdea77fe640 1 -- 192.168.123.105:0/2535453649 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7fde98002e10 con 0x7fdea8074160 2026-03-09T20:18:08.407 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:07.946+0000 7fdea77fe640 1 -- 192.168.123.105:0/2535453649 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7fde980033e0 con 0x7fdea8074160 2026-03-09T20:18:08.407 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:07.946+0000 7fdea77fe640 1 -- 192.168.123.105:0/2535453649 <== mon.0 v1:192.168.123.105:6789/0 5 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7fde98005780 con 0x7fdea8074160 2026-03-09T20:18:08.407 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:07.946+0000 7fdead81c640 1 -- 192.168.123.105:0/2535453649 >> v1:192.168.123.105:6789/0 conn(0x7fdea8074160 legacy=0x7fdea8074560 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 
2026-03-09T20:18:08.407 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:07.947+0000 7fdead81c640 1 -- 192.168.123.105:0/2535453649 shutdown_connections 2026-03-09T20:18:08.408 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:07.947+0000 7fdead81c640 1 -- 192.168.123.105:0/2535453649 >> 192.168.123.105:0/2535453649 conn(0x7fdea806f4e0 msgr2=0x7fdea8071920 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:08.408 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:07.947+0000 7fdead81c640 1 -- 192.168.123.105:0/2535453649 shutdown_connections 2026-03-09T20:18:08.408 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:07.947+0000 7fdead81c640 1 -- 192.168.123.105:0/2535453649 wait complete. 2026-03-09T20:18:08.408 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:07.947+0000 7fdead81c640 1 Processor -- start 2026-03-09T20:18:08.408 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:07.948+0000 7fdead81c640 1 -- start start 2026-03-09T20:18:08.408 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:07.948+0000 7fdead81c640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fdea81ab880 con 0x7fdea8074160 2026-03-09T20:18:08.408 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:07.948+0000 7fdeac81a640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7fdea8074160 0x7fdea81ab170 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:51432/0 (socket says 192.168.123.105:51432) 2026-03-09T20:18:08.408 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:07.948+0000 7fdeac81a640 1 -- 192.168.123.105:0/2258206063 learned_addr learned my addr 192.168.123.105:0/2258206063 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:08.408 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:07.949+0000 7fdea5ffb640 1 -- 192.168.123.105:0/2258206063 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3921217460 0 0) 0x7fdea81ab880 con 0x7fdea8074160 2026-03-09T20:18:08.408 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:07.949+0000 7fdea5ffb640 1 -- 192.168.123.105:0/2258206063 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fde88003620 con 0x7fdea8074160 2026-03-09T20:18:08.408 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:07.949+0000 7fdea5ffb640 1 -- 192.168.123.105:0/2258206063 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1723516727 0 0) 0x7fde88003620 con 0x7fdea8074160 2026-03-09T20:18:08.408 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:07.949+0000 7fdea5ffb640 1 -- 192.168.123.105:0/2258206063 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fdea81ab880 con 0x7fdea8074160 2026-03-09T20:18:08.408 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:07.949+0000 7fdea5ffb640 1 -- 192.168.123.105:0/2258206063 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7fde98002890 con 0x7fdea8074160 2026-03-09T20:18:08.408 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: 
stderr 2026-03-09T20:18:07.949+0000 7fdea5ffb640 1 -- 192.168.123.105:0/2258206063 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 3726948133 0 0) 0x7fdea81ab880 con 0x7fdea8074160 2026-03-09T20:18:08.408 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:07.950+0000 7fdea5ffb640 1 -- 192.168.123.105:0/2258206063 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fdea81aba50 con 0x7fdea8074160 2026-03-09T20:18:08.408 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:07.950+0000 7fdead81c640 1 -- 192.168.123.105:0/2258206063 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7fdea81abd60 con 0x7fdea8074160 2026-03-09T20:18:08.408 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:07.950+0000 7fdead81c640 1 -- 192.168.123.105:0/2258206063 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7fdea81af8f0 con 0x7fdea8074160 2026-03-09T20:18:08.408 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:07.951+0000 7fdea5ffb640 1 -- 192.168.123.105:0/2258206063 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7fde98004b90 con 0x7fdea8074160 2026-03-09T20:18:08.408 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:07.951+0000 7fdea5ffb640 1 -- 192.168.123.105:0/2258206063 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7fde98005f30 con 0x7fdea8074160 2026-03-09T20:18:08.408 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:07.952+0000 7fdea5ffb640 1 -- 192.168.123.105:0/2258206063 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 7) ==== 50106+0+0 (unknown 658845113 0 0) 0x7fde98007200 con 0x7fdea8074160 2026-03-09T20:18:08.408 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:07.952+0000 7fdea5ffb640 1 -- 192.168.123.105:0/2258206063 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (unknown 2608398624 0 0) 0x7fde9804df90 con 0x7fdea8074160 2026-03-09T20:18:08.408 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:07.952+0000 7fdead81c640 1 -- 192.168.123.105:0/2258206063 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fde70005180 con 0x7fdea8074160 2026-03-09T20:18:08.408 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:07.959+0000 7fdea5ffb640 1 -- 192.168.123.105:0/2258206063 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7fde98018960 con 0x7fdea8074160 2026-03-09T20:18:08.408 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.059+0000 7fdead81c640 1 -- 192.168.123.105:0/2258206063 --> v1:192.168.123.105:6800/1901557444 -- mgr_command(tid 0: {"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}) -- 0x7fde70002bf0 con 0x7fde8803ea90 2026-03-09T20:18:08.408 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.112+0000 7fdea5ffb640 1 -- 192.168.123.105:0/2258206063 <== mgr.14118 v1:192.168.123.105:6800/1901557444 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+0 (unknown 0 0 0) 0x7fde70002bf0 con 0x7fde8803ea90 2026-03-09T20:18:08.408 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.115+0000 7fde777fe640 1 -- 192.168.123.105:0/2258206063 >> v1:192.168.123.105:6800/1901557444 conn(0x7fde8803ea90 legacy=0x7fde88040f50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:08.408 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.115+0000 7fde777fe640 1 -- 192.168.123.105:0/2258206063 >> v1:192.168.123.105:6789/0 conn(0x7fdea8074160 legacy=0x7fdea81ab170 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:08.408 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.115+0000 7fde777fe640 1 -- 192.168.123.105:0/2258206063 shutdown_connections 2026-03-09T20:18:08.408 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.115+0000 7fde777fe640 1 -- 192.168.123.105:0/2258206063 >> 192.168.123.105:0/2258206063 conn(0x7fdea806f4e0 msgr2=0x7fdea8071920 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:08.408 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.115+0000 7fde777fe640 1 -- 192.168.123.105:0/2258206063 shutdown_connections 2026-03-09T20:18:08.408 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.115+0000 7fde777fe640 1 -- 192.168.123.105:0/2258206063 wait complete. 2026-03-09T20:18:08.597 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:08 vm05 ceph-mon[51870]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' 2026-03-09T20:18:08.597 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:08 vm05 ceph-mon[51870]: mgrmap e7: y(active, since 1.26275s) 2026-03-09T20:18:08.597 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:08 vm05 ceph-mon[51870]: from='client.14122 v1:192.168.123.105:0/1517109885' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-09T20:18:08.597 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:08 vm05 ceph-mon[51870]: from='client.14122 v1:192.168.123.105:0/1517109885' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-09T20:18:08.597 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:08 vm05 ceph-mon[51870]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' 2026-03-09T20:18:08.597 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:08 vm05 ceph-mon[51870]: from='client.14130 v1:192.168.123.105:0/2258206063' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:08.597 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:08 vm05 ceph-mon[51870]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' 2026-03-09T20:18:08.597 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:08 vm05 ceph-mon[51870]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:18:08.597 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:08 vm05 ceph-mon[51870]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:18:08.839 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout value unchanged 2026-03-09T20:18:08.839 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.597+0000 7ffbe5a46640 1 Processor -- start 2026-03-09T20:18:08.839 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 
2026-03-09T20:18:08.598+0000 7ffbe5a46640 1 -- start start 2026-03-09T20:18:08.839 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.598+0000 7ffbe5a46640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7ffbe010ab30 con 0x7ffbe0106700 2026-03-09T20:18:08.839 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.599+0000 7ffbdeffd640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7ffbe0106700 0x7ffbe0106b00 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:51434/0 (socket says 192.168.123.105:51434) 2026-03-09T20:18:08.839 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.599+0000 7ffbdeffd640 1 -- 192.168.123.105:0/2200387936 learned_addr learned my addr 192.168.123.105:0/2200387936 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:08.839 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.599+0000 7ffbddffb640 1 -- 192.168.123.105:0/2200387936 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 4202471216 0 0) 0x7ffbe010ab30 con 0x7ffbe0106700 2026-03-09T20:18:08.839 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.599+0000 7ffbddffb640 1 -- 192.168.123.105:0/2200387936 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7ffbcc003620 con 0x7ffbe0106700 2026-03-09T20:18:08.839 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.599+0000 7ffbddffb640 1 -- 192.168.123.105:0/2200387936 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 22652854 0 0) 0x7ffbcc003620 con 0x7ffbe0106700 2026-03-09T20:18:08.839 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.599+0000 7ffbddffb640 1 -- 192.168.123.105:0/2200387936 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7ffbe010bd10 con 0x7ffbe0106700 2026-03-09T20:18:08.839 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.599+0000 7ffbddffb640 1 -- 192.168.123.105:0/2200387936 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7ffbc8002e10 con 0x7ffbe0106700 2026-03-09T20:18:08.839 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.599+0000 7ffbddffb640 1 -- 192.168.123.105:0/2200387936 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7ffbc80033e0 con 0x7ffbe0106700 2026-03-09T20:18:08.839 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.599+0000 7ffbddffb640 1 -- 192.168.123.105:0/2200387936 <== mon.0 v1:192.168.123.105:6789/0 5 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7ffbc8005780 con 0x7ffbe0106700 2026-03-09T20:18:08.839 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.600+0000 7ffbe5a46640 1 -- 192.168.123.105:0/2200387936 >> v1:192.168.123.105:6789/0 conn(0x7ffbe0106700 legacy=0x7ffbe0106b00 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:08.839 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.600+0000 7ffbe5a46640 1 -- 192.168.123.105:0/2200387936 shutdown_connections 2026-03-09T20:18:08.839 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.600+0000 7ffbe5a46640 1 -- 192.168.123.105:0/2200387936 >> 192.168.123.105:0/2200387936 conn(0x7ffbe0101e90 msgr2=0x7ffbe01042d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:08.839 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.600+0000 7ffbe5a46640 1 -- 192.168.123.105:0/2200387936 shutdown_connections 2026-03-09T20:18:08.839 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.600+0000 7ffbe5a46640 1 -- 192.168.123.105:0/2200387936 wait complete. 2026-03-09T20:18:08.839 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.600+0000 7ffbe5a46640 1 Processor -- start 2026-03-09T20:18:08.839 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.600+0000 7ffbe5a46640 1 -- start start 2026-03-09T20:18:08.839 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.601+0000 7ffbe5a46640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7ffbe019a6c0 con 0x7ffbe0106700 2026-03-09T20:18:08.839 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.601+0000 7ffbdeffd640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7ffbe0106700 0x7ffbe0199fb0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:51440/0 (socket says 192.168.123.105:51440) 2026-03-09T20:18:08.839 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.601+0000 7ffbdeffd640 1 -- 192.168.123.105:0/3869147543 learned_addr learned my addr 192.168.123.105:0/3869147543 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:08.839 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.601+0000 7ffbe4a44640 1 -- 192.168.123.105:0/3869147543 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 826320583 0 0) 0x7ffbe019a6c0 con 0x7ffbe0106700 2026-03-09T20:18:08.839 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.601+0000 7ffbe4a44640 1 -- 192.168.123.105:0/3869147543 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7ffbb4003620 con 0x7ffbe0106700 2026-03-09T20:18:08.840 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.601+0000 7ffbe4a44640 1 -- 192.168.123.105:0/3869147543 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 244792010 0 0) 0x7ffbb4003620 con 0x7ffbe0106700 2026-03-09T20:18:08.840 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.601+0000 7ffbe4a44640 1 -- 192.168.123.105:0/3869147543 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7ffbe019a6c0 con 0x7ffbe0106700 2026-03-09T20:18:08.840 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.601+0000 7ffbe4a44640 1 -- 192.168.123.105:0/3869147543 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7ffbc8002890 con 0x7ffbe0106700 2026-03-09T20:18:08.840 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.601+0000 7ffbe4a44640 1 -- 192.168.123.105:0/3869147543 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2556022931 0 0) 
0x7ffbe019a6c0 con 0x7ffbe0106700 2026-03-09T20:18:08.840 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.601+0000 7ffbe4a44640 1 -- 192.168.123.105:0/3869147543 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7ffbe019a890 con 0x7ffbe0106700 2026-03-09T20:18:08.840 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.602+0000 7ffbe5a46640 1 -- 192.168.123.105:0/3869147543 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7ffbe019aba0 con 0x7ffbe0106700 2026-03-09T20:18:08.840 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.602+0000 7ffbe5a46640 1 -- 192.168.123.105:0/3869147543 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7ffbe019e730 con 0x7ffbe0106700 2026-03-09T20:18:08.840 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.602+0000 7ffbe4a44640 1 -- 192.168.123.105:0/3869147543 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7ffbc8004bd0 con 0x7ffbe0106700 2026-03-09T20:18:08.840 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.602+0000 7ffbe4a44640 1 -- 192.168.123.105:0/3869147543 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7ffbc8005e80 con 0x7ffbe0106700 2026-03-09T20:18:08.840 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.603+0000 7ffbe4a44640 1 -- 192.168.123.105:0/3869147543 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 7) ==== 50106+0+0 (unknown 658845113 0 0) 0x7ffbc80124d0 con 0x7ffbe0106700 2026-03-09T20:18:08.840 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.603+0000 7ffbe5a46640 1 -- 192.168.123.105:0/3869147543 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7ffbe010b8d0 con 0x7ffbe0106700 2026-03-09T20:18:08.840 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.603+0000 7ffbe4a44640 1 -- 192.168.123.105:0/3869147543 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (unknown 2608398624 0 0) 0x7ffbc8002c70 con 0x7ffbe0106700 2026-03-09T20:18:08.840 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.606+0000 7ffbe4a44640 1 -- 192.168.123.105:0/3869147543 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7ffbc80189e0 con 0x7ffbe0106700 2026-03-09T20:18:08.840 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.695+0000 7ffbe5a46640 1 -- 192.168.123.105:0/3869147543 --> v1:192.168.123.105:6800/1901557444 -- mgr_command(tid 0: {"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}) -- 0x7ffbe0109db0 con 0x7ffbb403e9f0 2026-03-09T20:18:08.840 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.695+0000 7ffbe4a44640 1 -- 192.168.123.105:0/3869147543 <== mgr.14118 v1:192.168.123.105:6800/1901557444 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+16 (unknown 0 0 2070689548) 0x7ffbe0109db0 con 0x7ffbb403e9f0 2026-03-09T20:18:08.840 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.697+0000 7ffbe5a46640 1 -- 192.168.123.105:0/3869147543 >> v1:192.168.123.105:6800/1901557444 conn(0x7ffbb403e9f0 
legacy=0x7ffbb4040eb0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:08.840 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.698+0000 7ffbe5a46640 1 -- 192.168.123.105:0/3869147543 >> v1:192.168.123.105:6789/0 conn(0x7ffbe0106700 legacy=0x7ffbe0199fb0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:08.840 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.698+0000 7ffbe5a46640 1 -- 192.168.123.105:0/3869147543 shutdown_connections 2026-03-09T20:18:08.840 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.698+0000 7ffbe5a46640 1 -- 192.168.123.105:0/3869147543 >> 192.168.123.105:0/3869147543 conn(0x7ffbe0101e90 msgr2=0x7ffbe01042d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:08.840 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.698+0000 7ffbe5a46640 1 -- 192.168.123.105:0/3869147543 shutdown_connections 2026-03-09T20:18:08.840 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.698+0000 7ffbe5a46640 1 -- 192.168.123.105:0/3869147543 wait complete. 2026-03-09T20:18:08.840 INFO:teuthology.orchestra.run.vm05.stdout:Generating ssh key... 2026-03-09T20:18:09.160 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:09 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: Generating public/private ed25519 key pair. 2026-03-09T20:18:09.160 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:09 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: Your identification has been saved in /tmp/tmpm11ff8fi/key 2026-03-09T20:18:09.160 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:09 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: Your public key has been saved in /tmp/tmpm11ff8fi/key.pub 2026-03-09T20:18:09.160 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:09 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: The key fingerprint is: 2026-03-09T20:18:09.160 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:09 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: SHA256:eTi5bnJGhxUIqNYCZV90gd9O0kdHXovgKS+unh91+2k ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea 2026-03-09T20:18:09.160 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:09 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: The key's randomart image is: 2026-03-09T20:18:09.160 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:09 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: +--[ED25519 256]--+ 2026-03-09T20:18:09.160 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:09 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: | .o o+ooo . .. .| 2026-03-09T20:18:09.160 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:09 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: |.. ...... o +.o..| 2026-03-09T20:18:09.160 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:09 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: | . o. . + = o.. | 2026-03-09T20:18:09.160 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:09 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: | + . o+B . | 2026-03-09T20:18:09.160 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:09 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: | . . SB.+ . | 2026-03-09T20:18:09.160 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:09 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: | ++= . . 
| 2026-03-09T20:18:09.160 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:09 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: | ..+ . | 2026-03-09T20:18:09.160 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:09 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: | ..* . .E.| 2026-03-09T20:18:09.160 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:09 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: | .Oo. .o | 2026-03-09T20:18:09.160 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:09 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: +----[SHA256]-----+ 2026-03-09T20:18:09.220 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.966+0000 7f91b486e640 1 Processor -- start 2026-03-09T20:18:09.220 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.967+0000 7f91b486e640 1 -- start start 2026-03-09T20:18:09.220 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.967+0000 7f91b486e640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f91ac10cd80 con 0x7f91ac108950 2026-03-09T20:18:09.220 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.967+0000 7f91b25e3640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f91ac108950 0x7f91ac108d50 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:51452/0 (socket says 192.168.123.105:51452) 2026-03-09T20:18:09.220 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.967+0000 7f91b25e3640 1 -- 192.168.123.105:0/2194598243 learned_addr learned my addr 192.168.123.105:0/2194598243 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:09.220 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.967+0000 7f91b15e1640 1 -- 192.168.123.105:0/2194598243 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2112779954 0 0) 0x7f91ac10cd80 con 0x7f91ac108950 2026-03-09T20:18:09.220 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.968+0000 7f91b15e1640 1 -- 192.168.123.105:0/2194598243 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f9188003620 con 0x7f91ac108950 2026-03-09T20:18:09.220 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.968+0000 7f91b15e1640 1 -- 192.168.123.105:0/2194598243 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 3531469865 0 0) 0x7f9188003620 con 0x7f91ac108950 2026-03-09T20:18:09.220 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.968+0000 7f91b15e1640 1 -- 192.168.123.105:0/2194598243 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f91ac10df60 con 0x7f91ac108950 2026-03-09T20:18:09.220 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.968+0000 7f91b15e1640 1 -- 192.168.123.105:0/2194598243 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f9198002e10 con 0x7f91ac108950 2026-03-09T20:18:09.220 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.968+0000 7f91b15e1640 1 -- 192.168.123.105:0/2194598243 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f91980034a0 con 
0x7f91ac108950 2026-03-09T20:18:09.220 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.968+0000 7f91b486e640 1 -- 192.168.123.105:0/2194598243 >> v1:192.168.123.105:6789/0 conn(0x7f91ac108950 legacy=0x7f91ac108d50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:09.220 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.969+0000 7f91b486e640 1 -- 192.168.123.105:0/2194598243 shutdown_connections 2026-03-09T20:18:09.220 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.969+0000 7f91b486e640 1 -- 192.168.123.105:0/2194598243 >> 192.168.123.105:0/2194598243 conn(0x7f91ac07bdf0 msgr2=0x7f91ac07c240 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:09.220 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.969+0000 7f91b486e640 1 -- 192.168.123.105:0/2194598243 shutdown_connections 2026-03-09T20:18:09.220 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.969+0000 7f91b486e640 1 -- 192.168.123.105:0/2194598243 wait complete. 2026-03-09T20:18:09.220 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.969+0000 7f91b486e640 1 Processor -- start 2026-03-09T20:18:09.220 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.969+0000 7f91b486e640 1 -- start start 2026-03-09T20:18:09.220 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.969+0000 7f91b486e640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f91ac19ebf0 con 0x7f91ac108950 2026-03-09T20:18:09.220 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.970+0000 7f91b25e3640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f91ac108950 0x7f91ac19e4e0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:51464/0 (socket says 192.168.123.105:51464) 2026-03-09T20:18:09.220 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.970+0000 7f91b25e3640 1 -- 192.168.123.105:0/4106406019 learned_addr learned my addr 192.168.123.105:0/4106406019 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:09.220 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.970+0000 7f91977fe640 1 -- 192.168.123.105:0/4106406019 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3596557699 0 0) 0x7f91ac19ebf0 con 0x7f91ac108950 2026-03-09T20:18:09.220 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.970+0000 7f91977fe640 1 -- 192.168.123.105:0/4106406019 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f9180003620 con 0x7f91ac108950 2026-03-09T20:18:09.220 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.970+0000 7f91977fe640 1 -- 192.168.123.105:0/4106406019 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 971941602 0 0) 0x7f9180003620 con 0x7f91ac108950 2026-03-09T20:18:09.220 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.970+0000 7f91977fe640 1 -- 192.168.123.105:0/4106406019 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f91ac19ebf0 con 0x7f91ac108950 2026-03-09T20:18:09.220 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.970+0000 7f91977fe640 1 -- 192.168.123.105:0/4106406019 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f9198003270 con 0x7f91ac108950 2026-03-09T20:18:09.220 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.970+0000 7f91977fe640 1 -- 192.168.123.105:0/4106406019 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 3093735210 0 0) 0x7f91ac19ebf0 con 0x7f91ac108950 2026-03-09T20:18:09.220 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.970+0000 7f91977fe640 1 -- 192.168.123.105:0/4106406019 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f91ac19edc0 con 0x7f91ac108950 2026-03-09T20:18:09.220 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.970+0000 7f91977fe640 1 -- 192.168.123.105:0/4106406019 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f9198002830 con 0x7f91ac108950 2026-03-09T20:18:09.220 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.970+0000 7f91977fe640 1 -- 192.168.123.105:0/4106406019 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f9198006480 con 0x7f91ac108950 2026-03-09T20:18:09.220 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.971+0000 7f91b486e640 1 -- 192.168.123.105:0/4106406019 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f91ac19f0d0 con 0x7f91ac108950 2026-03-09T20:18:09.220 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.971+0000 7f91977fe640 1 -- 192.168.123.105:0/4106406019 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 7) ==== 50106+0+0 (unknown 658845113 0 0) 0x7f9198004b90 con 0x7f91ac108950 2026-03-09T20:18:09.220 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.971+0000 7f91b486e640 1 -- 192.168.123.105:0/4106406019 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f91ac1a2be0 con 0x7f91ac108950 2026-03-09T20:18:09.221 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.972+0000 7f91977fe640 1 -- 192.168.123.105:0/4106406019 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (unknown 2608398624 0 0) 0x7f919804d850 con 0x7f91ac108950 2026-03-09T20:18:09.221 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.972+0000 7f91b486e640 1 -- 192.168.123.105:0/4106406019 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f91ac10d9d0 con 0x7f91ac108950 2026-03-09T20:18:09.221 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:08.975+0000 7f91977fe640 1 -- 192.168.123.105:0/4106406019 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f91980180a0 con 0x7f91ac108950 2026-03-09T20:18:09.221 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.066+0000 7f91b486e640 1 -- 192.168.123.105:0/4106406019 --> v1:192.168.123.105:6800/1901557444 -- mgr_command(tid 0: {"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}) -- 0x7f91ac107450 con 0x7f918003ea90 
2026-03-09T20:18:09.221 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.088+0000 7f91977fe640 1 -- 192.168.123.105:0/4106406019 <== mgr.14118 v1:192.168.123.105:6800/1901557444 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+0 (unknown 0 0 0) 0x7f91ac107450 con 0x7f918003ea90 2026-03-09T20:18:09.221 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.090+0000 7f91b486e640 1 -- 192.168.123.105:0/4106406019 >> v1:192.168.123.105:6800/1901557444 conn(0x7f918003ea90 legacy=0x7f9180040f50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:09.221 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.090+0000 7f91b486e640 1 -- 192.168.123.105:0/4106406019 >> v1:192.168.123.105:6789/0 conn(0x7f91ac108950 legacy=0x7f91ac19e4e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:09.221 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.090+0000 7f91b486e640 1 -- 192.168.123.105:0/4106406019 shutdown_connections 2026-03-09T20:18:09.221 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.090+0000 7f91b486e640 1 -- 192.168.123.105:0/4106406019 >> 192.168.123.105:0/4106406019 conn(0x7f91ac07bdf0 msgr2=0x7f91ac1056b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:09.221 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.090+0000 7f91b486e640 1 -- 192.168.123.105:0/4106406019 shutdown_connections 2026-03-09T20:18:09.221 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.091+0000 7f91b486e640 1 -- 192.168.123.105:0/4106406019 wait complete. 2026-03-09T20:18:09.622 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIaab6ZwLVI101Eqfehv3Q++OzlE71QnVFqrltWWyoHB ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea 2026-03-09T20:18:09.622 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.344+0000 7f014f396640 1 Processor -- start 2026-03-09T20:18:09.622 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.344+0000 7f014f396640 1 -- start start 2026-03-09T20:18:09.622 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.344+0000 7f014f396640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f01481086c0 con 0x7f0148104320 2026-03-09T20:18:09.623 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.345+0000 7f014e394640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f0148104320 0x7f0148104720 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:51478/0 (socket says 192.168.123.105:51478) 2026-03-09T20:18:09.623 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.345+0000 7f014e394640 1 -- 192.168.123.105:0/298350505 learned_addr learned my addr 192.168.123.105:0/298350505 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:09.623 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.345+0000 7f014d392640 1 -- 192.168.123.105:0/298350505 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 549488469 0 0) 0x7f01481086c0 con 0x7f0148104320 2026-03-09T20:18:09.623 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 
2026-03-09T20:18:09.345+0000 7f014d392640 1 -- 192.168.123.105:0/298350505 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f012c003620 con 0x7f0148104320 2026-03-09T20:18:09.623 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.345+0000 7f014d392640 1 -- 192.168.123.105:0/298350505 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 594422194 0 0) 0x7f012c003620 con 0x7f0148104320 2026-03-09T20:18:09.623 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.345+0000 7f014d392640 1 -- 192.168.123.105:0/298350505 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f01481098a0 con 0x7f0148104320 2026-03-09T20:18:09.623 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.346+0000 7f014d392640 1 -- 192.168.123.105:0/298350505 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f013c002e10 con 0x7f0148104320 2026-03-09T20:18:09.623 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.346+0000 7f014d392640 1 -- 192.168.123.105:0/298350505 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f013c0034a0 con 0x7f0148104320 2026-03-09T20:18:09.623 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.346+0000 7f014f396640 1 -- 192.168.123.105:0/298350505 >> v1:192.168.123.105:6789/0 conn(0x7f0148104320 legacy=0x7f0148104720 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:09.623 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.346+0000 7f014f396640 1 -- 192.168.123.105:0/298350505 shutdown_connections 2026-03-09T20:18:09.623 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.346+0000 7f014f396640 1 -- 192.168.123.105:0/298350505 >> 192.168.123.105:0/298350505 conn(0x7f01480fff40 msgr2=0x7f0148102360 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:09.623 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.346+0000 7f014f396640 1 -- 192.168.123.105:0/298350505 shutdown_connections 2026-03-09T20:18:09.623 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.346+0000 7f014f396640 1 -- 192.168.123.105:0/298350505 wait complete. 
2026-03-09T20:18:09.623 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.347+0000 7f014f396640 1 Processor -- start 2026-03-09T20:18:09.623 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.347+0000 7f014f396640 1 -- start start 2026-03-09T20:18:09.623 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.347+0000 7f014f396640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f01481a2e50 con 0x7f0148104320 2026-03-09T20:18:09.623 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.347+0000 7f014e394640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f0148104320 0x7f01481a2740 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:51488/0 (socket says 192.168.123.105:51488) 2026-03-09T20:18:09.623 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.347+0000 7f014e394640 1 -- 192.168.123.105:0/1411171583 learned_addr learned my addr 192.168.123.105:0/1411171583 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:09.623 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.347+0000 7f01377fe640 1 -- 192.168.123.105:0/1411171583 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3172409389 0 0) 0x7f01481a2e50 con 0x7f0148104320 2026-03-09T20:18:09.623 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.347+0000 7f01377fe640 1 -- 192.168.123.105:0/1411171583 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f0124003620 con 0x7f0148104320 2026-03-09T20:18:09.623 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.347+0000 7f01377fe640 1 -- 192.168.123.105:0/1411171583 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3629962040 0 0) 0x7f0124003620 con 0x7f0148104320 2026-03-09T20:18:09.623 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.348+0000 7f01377fe640 1 -- 192.168.123.105:0/1411171583 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f01481a2e50 con 0x7f0148104320 2026-03-09T20:18:09.623 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.348+0000 7f01377fe640 1 -- 192.168.123.105:0/1411171583 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f013c0031f0 con 0x7f0148104320 2026-03-09T20:18:09.623 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.348+0000 7f01377fe640 1 -- 192.168.123.105:0/1411171583 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 3696834847 0 0) 0x7f01481a2e50 con 0x7f0148104320 2026-03-09T20:18:09.623 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.348+0000 7f01377fe640 1 -- 192.168.123.105:0/1411171583 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f01481a3020 con 0x7f0148104320 2026-03-09T20:18:09.623 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.348+0000 7f014f396640 1 -- 192.168.123.105:0/1411171583 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f01481a3330 con 0x7f0148104320 2026-03-09T20:18:09.623 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.348+0000 7f014f396640 1 -- 192.168.123.105:0/1411171583 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f01481a6ec0 con 0x7f0148104320 2026-03-09T20:18:09.623 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.348+0000 7f01377fe640 1 -- 192.168.123.105:0/1411171583 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f013c0034a0 con 0x7f0148104320 2026-03-09T20:18:09.623 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.348+0000 7f01377fe640 1 -- 192.168.123.105:0/1411171583 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f013c005e60 con 0x7f0148104320 2026-03-09T20:18:09.623 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.349+0000 7f01377fe640 1 -- 192.168.123.105:0/1411171583 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 8) ==== 50212+0+0 (unknown 775757594 0 0) 0x7f013c012530 con 0x7f0148104320 2026-03-09T20:18:09.623 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.349+0000 7f014f396640 1 -- 192.168.123.105:0/1411171583 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f01481093d0 con 0x7f0148104320 2026-03-09T20:18:09.623 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.349+0000 7f01377fe640 1 -- 192.168.123.105:0/1411171583 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (unknown 2608398624 0 0) 0x7f013c04e1f0 con 0x7f0148104320 2026-03-09T20:18:09.623 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.352+0000 7f01377fe640 1 -- 192.168.123.105:0/1411171583 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f013c0188c0 con 0x7f0148104320 2026-03-09T20:18:09.623 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.441+0000 7f014f396640 1 -- 192.168.123.105:0/1411171583 --> v1:192.168.123.105:6800/1901557444 -- mgr_command(tid 0: {"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}) -- 0x7f0148101a40 con 0x7f012403ec60 2026-03-09T20:18:09.623 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.442+0000 7f01377fe640 1 -- 192.168.123.105:0/1411171583 <== mgr.14118 v1:192.168.123.105:6800/1901557444 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+123 (unknown 0 0 261029476) 0x7f0148101a40 con 0x7f012403ec60 2026-03-09T20:18:09.623 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.444+0000 7f014f396640 1 -- 192.168.123.105:0/1411171583 >> v1:192.168.123.105:6800/1901557444 conn(0x7f012403ec60 legacy=0x7f0124041120 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:09.623 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.444+0000 7f014f396640 1 -- 192.168.123.105:0/1411171583 >> v1:192.168.123.105:6789/0 conn(0x7f0148104320 legacy=0x7f01481a2740 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:09.623 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.444+0000 7f014f396640 1 -- 192.168.123.105:0/1411171583 shutdown_connections 2026-03-09T20:18:09.623 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.444+0000 7f014f396640 1 -- 192.168.123.105:0/1411171583 >> 192.168.123.105:0/1411171583 conn(0x7f01480fff40 msgr2=0x7f0148100320 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:09.623 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.444+0000 7f014f396640 1 -- 192.168.123.105:0/1411171583 shutdown_connections 2026-03-09T20:18:09.623 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.444+0000 7f014f396640 1 -- 192.168.123.105:0/1411171583 wait complete. 2026-03-09T20:18:09.623 INFO:teuthology.orchestra.run.vm05.stdout:Wrote public SSH key to /home/ubuntu/cephtest/ceph.pub 2026-03-09T20:18:09.623 INFO:teuthology.orchestra.run.vm05.stdout:Adding key to root@localhost authorized_keys... 2026-03-09T20:18:09.623 INFO:teuthology.orchestra.run.vm05.stdout:Adding host vm05... 2026-03-09T20:18:10.290 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:10 vm05 ceph-mon[51870]: [09/Mar/2026:20:18:08] ENGINE Bus STARTING 2026-03-09T20:18:10.290 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:10 vm05 ceph-mon[51870]: [09/Mar/2026:20:18:08] ENGINE Serving on http://192.168.123.105:8765 2026-03-09T20:18:10.290 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:10 vm05 ceph-mon[51870]: [09/Mar/2026:20:18:08] ENGINE Serving on https://192.168.123.105:7150 2026-03-09T20:18:10.290 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:10 vm05 ceph-mon[51870]: [09/Mar/2026:20:18:08] ENGINE Bus STARTED 2026-03-09T20:18:10.290 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:10 vm05 ceph-mon[51870]: [09/Mar/2026:20:18:08] ENGINE Client ('192.168.123.105', 53636) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T20:18:10.290 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:10 vm05 ceph-mon[51870]: from='client.14132 v1:192.168.123.105:0/3869147543' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:10.290 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:10 vm05 ceph-mon[51870]: from='client.14134 v1:192.168.123.105:0/4106406019' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:10.291 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:10 vm05 ceph-mon[51870]: Generating ssh key... 
2026-03-09T20:18:10.291 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:10 vm05 ceph-mon[51870]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' 2026-03-09T20:18:10.291 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:10 vm05 ceph-mon[51870]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' 2026-03-09T20:18:10.291 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:10 vm05 ceph-mon[51870]: mgrmap e8: y(active, since 2s) 2026-03-09T20:18:11.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:11 vm05 ceph-mon[51870]: from='client.14136 v1:192.168.123.105:0/1411171583' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:11.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:11 vm05 ceph-mon[51870]: from='client.14138 v1:192.168.123.105:0/4172123238' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm05", "addr": "192.168.123.105", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:11.714 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout Added host 'vm05' with addr '192.168.123.105' 2026-03-09T20:18:11.714 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.933+0000 7f0bc9e8f640 1 Processor -- start 2026-03-09T20:18:11.714 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.933+0000 7f0bc9e8f640 1 -- start start 2026-03-09T20:18:11.714 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.933+0000 7f0bc9e8f640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f0bc410cd80 con 0x7f0bc4108950 2026-03-09T20:18:11.714 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.933+0000 7f0bc37fe640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f0bc4108950 0x7f0bc4108d50 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:39168/0 (socket says 192.168.123.105:39168) 2026-03-09T20:18:11.714 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.933+0000 7f0bc37fe640 1 -- 192.168.123.105:0/96169904 learned_addr learned my addr 192.168.123.105:0/96169904 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:11.714 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.934+0000 7f0bc27fc640 1 -- 192.168.123.105:0/96169904 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3324156217 0 0) 0x7f0bc410cd80 con 0x7f0bc4108950 2026-03-09T20:18:11.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.934+0000 7f0bc27fc640 1 -- 192.168.123.105:0/96169904 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f0ba0003620 con 0x7f0bc4108950 2026-03-09T20:18:11.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.934+0000 7f0bc27fc640 1 -- 192.168.123.105:0/96169904 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 554378775 0 0) 0x7f0ba0003620 con 0x7f0bc4108950 2026-03-09T20:18:11.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.934+0000 7f0bc27fc640 1 -- 192.168.123.105:0/96169904 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f0bc410df60 con 0x7f0bc4108950 2026-03-09T20:18:11.715 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.934+0000 7f0bc27fc640 1 -- 192.168.123.105:0/96169904 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f0bb4002e10 con 0x7f0bc4108950 2026-03-09T20:18:11.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.934+0000 7f0bc27fc640 1 -- 192.168.123.105:0/96169904 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f0bb40034a0 con 0x7f0bc4108950 2026-03-09T20:18:11.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.934+0000 7f0bc9e8f640 1 -- 192.168.123.105:0/96169904 >> v1:192.168.123.105:6789/0 conn(0x7f0bc4108950 legacy=0x7f0bc4108d50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:11.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.934+0000 7f0bc9e8f640 1 -- 192.168.123.105:0/96169904 shutdown_connections 2026-03-09T20:18:11.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.934+0000 7f0bc9e8f640 1 -- 192.168.123.105:0/96169904 >> 192.168.123.105:0/96169904 conn(0x7f0bc407bdf0 msgr2=0x7f0bc407c240 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:11.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.935+0000 7f0bc9e8f640 1 -- 192.168.123.105:0/96169904 shutdown_connections 2026-03-09T20:18:11.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.935+0000 7f0bc9e8f640 1 -- 192.168.123.105:0/96169904 wait complete. 2026-03-09T20:18:11.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.935+0000 7f0bc9e8f640 1 Processor -- start 2026-03-09T20:18:11.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.935+0000 7f0bc9e8f640 1 -- start start 2026-03-09T20:18:11.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.935+0000 7f0bc9e8f640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f0bc419ec60 con 0x7f0bc4108950 2026-03-09T20:18:11.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.935+0000 7f0bc37fe640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f0bc4108950 0x7f0bc419e550 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:39176/0 (socket says 192.168.123.105:39176) 2026-03-09T20:18:11.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.935+0000 7f0bc37fe640 1 -- 192.168.123.105:0/4172123238 learned_addr learned my addr 192.168.123.105:0/4172123238 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:11.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.936+0000 7f0bc0ff9640 1 -- 192.168.123.105:0/4172123238 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 4018812899 0 0) 0x7f0bc419ec60 con 0x7f0bc4108950 2026-03-09T20:18:11.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.936+0000 7f0bc0ff9640 1 -- 192.168.123.105:0/4172123238 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f0b9c003620 con 0x7f0bc4108950 2026-03-09T20:18:11.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 
2026-03-09T20:18:09.936+0000 7f0bc0ff9640 1 -- 192.168.123.105:0/4172123238 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3932132064 0 0) 0x7f0b9c003620 con 0x7f0bc4108950 2026-03-09T20:18:11.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.936+0000 7f0bc0ff9640 1 -- 192.168.123.105:0/4172123238 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f0bc419ec60 con 0x7f0bc4108950 2026-03-09T20:18:11.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.936+0000 7f0bc0ff9640 1 -- 192.168.123.105:0/4172123238 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f0bb4003270 con 0x7f0bc4108950 2026-03-09T20:18:11.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.936+0000 7f0bc0ff9640 1 -- 192.168.123.105:0/4172123238 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2530081422 0 0) 0x7f0bc419ec60 con 0x7f0bc4108950 2026-03-09T20:18:11.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.936+0000 7f0bc0ff9640 1 -- 192.168.123.105:0/4172123238 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f0bc419ee30 con 0x7f0bc4108950 2026-03-09T20:18:11.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.936+0000 7f0bc9e8f640 1 -- 192.168.123.105:0/4172123238 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f0bc419f140 con 0x7f0bc4108950 2026-03-09T20:18:11.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.936+0000 7f0bc9e8f640 1 -- 192.168.123.105:0/4172123238 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f0bc41a2c50 con 0x7f0bc4108950 2026-03-09T20:18:11.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.937+0000 7f0bc0ff9640 1 -- 192.168.123.105:0/4172123238 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f0bb40034d0 con 0x7f0bc4108950 2026-03-09T20:18:11.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.937+0000 7f0bc0ff9640 1 -- 192.168.123.105:0/4172123238 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f0bb4005d40 con 0x7f0bc4108950 2026-03-09T20:18:11.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.937+0000 7f0bc0ff9640 1 -- 192.168.123.105:0/4172123238 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 8) ==== 50212+0+0 (unknown 775757594 0 0) 0x7f0bb40123f0 con 0x7f0bc4108950 2026-03-09T20:18:11.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.937+0000 7f0bc0ff9640 1 -- 192.168.123.105:0/4172123238 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (unknown 2608398624 0 0) 0x7f0bb404e0b0 con 0x7f0bc4108950 2026-03-09T20:18:11.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.938+0000 7f0bc9e8f640 1 -- 192.168.123.105:0/4172123238 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f0b84005180 con 0x7f0bc4108950 2026-03-09T20:18:11.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:09.941+0000 7f0bc0ff9640 1 -- 192.168.123.105:0/4172123238 <== mon.0 
v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f0bb4018900 con 0x7f0bc4108950 2026-03-09T20:18:11.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:10.039+0000 7f0bc9e8f640 1 -- 192.168.123.105:0/4172123238 --> v1:192.168.123.105:6800/1901557444 -- mgr_command(tid 0: {"prefix": "orch host add", "hostname": "vm05", "addr": "192.168.123.105", "target": ["mon-mgr", ""]}) -- 0x7f0b84002bf0 con 0x7f0b9c03ec60 2026-03-09T20:18:11.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.545+0000 7f0bc0ff9640 1 -- 192.168.123.105:0/4172123238 <== mgr.14118 v1:192.168.123.105:6800/1901557444 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+46 (unknown 0 0 2505307444) 0x7f0b84002bf0 con 0x7f0b9c03ec60 2026-03-09T20:18:11.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.549+0000 7f0bc9e8f640 1 -- 192.168.123.105:0/4172123238 >> v1:192.168.123.105:6800/1901557444 conn(0x7f0b9c03ec60 legacy=0x7f0b9c041120 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:11.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.549+0000 7f0bc9e8f640 1 -- 192.168.123.105:0/4172123238 >> v1:192.168.123.105:6789/0 conn(0x7f0bc4108950 legacy=0x7f0bc419e550 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:11.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.549+0000 7f0bc9e8f640 1 -- 192.168.123.105:0/4172123238 shutdown_connections 2026-03-09T20:18:11.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.549+0000 7f0bc9e8f640 1 -- 192.168.123.105:0/4172123238 >> 192.168.123.105:0/4172123238 conn(0x7f0bc407bdf0 msgr2=0x7f0bc41056f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:11.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.549+0000 7f0bc9e8f640 1 -- 192.168.123.105:0/4172123238 shutdown_connections 2026-03-09T20:18:11.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.549+0000 7f0bc9e8f640 1 -- 192.168.123.105:0/4172123238 wait complete. 2026-03-09T20:18:11.715 INFO:teuthology.orchestra.run.vm05.stdout:Deploying unmanaged mon service... 2026-03-09T20:18:12.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:12 vm05 ceph-mon[51870]: Deploying cephadm binary to vm05 2026-03-09T20:18:12.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:12 vm05 ceph-mon[51870]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' 2026-03-09T20:18:12.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:12 vm05 ceph-mon[51870]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:18:12.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:12 vm05 ceph-mon[51870]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' 2026-03-09T20:18:12.107 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout Scheduled mon update... 
2026-03-09T20:18:12.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.851+0000 7f4810dcf640 1 Processor -- start 2026-03-09T20:18:12.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.852+0000 7f4810dcf640 1 -- start start 2026-03-09T20:18:12.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.852+0000 7f4810dcf640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f480c10cd80 con 0x7f480c108950 2026-03-09T20:18:12.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.853+0000 7f480a575640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f480c108950 0x7f480c108d50 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:39190/0 (socket says 192.168.123.105:39190) 2026-03-09T20:18:12.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.853+0000 7f480a575640 1 -- 192.168.123.105:0/2780109933 learned_addr learned my addr 192.168.123.105:0/2780109933 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:12.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.853+0000 7f4809573640 1 -- 192.168.123.105:0/2780109933 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 4052992862 0 0) 0x7f480c10cd80 con 0x7f480c108950 2026-03-09T20:18:12.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.853+0000 7f4809573640 1 -- 192.168.123.105:0/2780109933 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f47ec003620 con 0x7f480c108950 2026-03-09T20:18:12.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.854+0000 7f4809573640 1 -- 192.168.123.105:0/2780109933 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 2713455405 0 0) 0x7f47ec003620 con 0x7f480c108950 2026-03-09T20:18:12.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.854+0000 7f4809573640 1 -- 192.168.123.105:0/2780109933 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f480c10df60 con 0x7f480c108950 2026-03-09T20:18:12.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.854+0000 7f4809573640 1 -- 192.168.123.105:0/2780109933 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f47f4002e10 con 0x7f480c108950 2026-03-09T20:18:12.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.854+0000 7f4809573640 1 -- 192.168.123.105:0/2780109933 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f47f40034a0 con 0x7f480c108950 2026-03-09T20:18:12.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.854+0000 7f4810dcf640 1 -- 192.168.123.105:0/2780109933 >> v1:192.168.123.105:6789/0 conn(0x7f480c108950 legacy=0x7f480c108d50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:12.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.854+0000 7f4810dcf640 1 -- 192.168.123.105:0/2780109933 shutdown_connections 2026-03-09T20:18:12.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.854+0000 7f4810dcf640 
1 -- 192.168.123.105:0/2780109933 >> 192.168.123.105:0/2780109933 conn(0x7f480c07bdf0 msgr2=0x7f480c07c240 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:12.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.855+0000 7f4810dcf640 1 -- 192.168.123.105:0/2780109933 shutdown_connections 2026-03-09T20:18:12.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.855+0000 7f4810dcf640 1 -- 192.168.123.105:0/2780109933 wait complete. 2026-03-09T20:18:12.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.855+0000 7f4810dcf640 1 Processor -- start 2026-03-09T20:18:12.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.855+0000 7f4810dcf640 1 -- start start 2026-03-09T20:18:12.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.855+0000 7f4810dcf640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f480c19ebc0 con 0x7f480c108950 2026-03-09T20:18:12.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.856+0000 7f480a575640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f480c108950 0x7f480c19e4b0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:39194/0 (socket says 192.168.123.105:39194) 2026-03-09T20:18:12.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.856+0000 7f480a575640 1 -- 192.168.123.105:0/1373650697 learned_addr learned my addr 192.168.123.105:0/1373650697 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:12.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.856+0000 7f47fb7fe640 1 -- 192.168.123.105:0/1373650697 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3274937017 0 0) 0x7f480c19ebc0 con 0x7f480c108950 2026-03-09T20:18:12.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.856+0000 7f47fb7fe640 1 -- 192.168.123.105:0/1373650697 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f47e4003620 con 0x7f480c108950 2026-03-09T20:18:12.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.856+0000 7f47fb7fe640 1 -- 192.168.123.105:0/1373650697 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3762585986 0 0) 0x7f47e4003620 con 0x7f480c108950 2026-03-09T20:18:12.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.856+0000 7f47fb7fe640 1 -- 192.168.123.105:0/1373650697 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f480c19ebc0 con 0x7f480c108950 2026-03-09T20:18:12.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.856+0000 7f47fb7fe640 1 -- 192.168.123.105:0/1373650697 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f47f40031f0 con 0x7f480c108950 2026-03-09T20:18:12.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.856+0000 7f47fb7fe640 1 -- 192.168.123.105:0/1373650697 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 4107543867 0 0) 0x7f480c19ebc0 con 0x7f480c108950 2026-03-09T20:18:12.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: 
stderr 2026-03-09T20:18:11.856+0000 7f47fb7fe640 1 -- 192.168.123.105:0/1373650697 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f480c19ed90 con 0x7f480c108950 2026-03-09T20:18:12.109 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.856+0000 7f4810dcf640 1 -- 192.168.123.105:0/1373650697 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f480c19f0a0 con 0x7f480c108950 2026-03-09T20:18:12.109 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.857+0000 7f4810dcf640 1 -- 192.168.123.105:0/1373650697 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f480c1a2bb0 con 0x7f480c108950 2026-03-09T20:18:12.109 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.857+0000 7f47fb7fe640 1 -- 192.168.123.105:0/1373650697 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f47f40034a0 con 0x7f480c108950 2026-03-09T20:18:12.109 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.857+0000 7f47fb7fe640 1 -- 192.168.123.105:0/1373650697 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f47f4005e80 con 0x7f480c108950 2026-03-09T20:18:12.109 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.857+0000 7f47fb7fe640 1 -- 192.168.123.105:0/1373650697 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 8) ==== 50212+0+0 (unknown 775757594 0 0) 0x7f47f4012530 con 0x7f480c108950 2026-03-09T20:18:12.109 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.857+0000 7f4810dcf640 1 -- 192.168.123.105:0/1373650697 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f480c10db20 con 0x7f480c108950 2026-03-09T20:18:12.109 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.858+0000 7f47fb7fe640 1 -- 192.168.123.105:0/1373650697 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (unknown 2608398624 0 0) 0x7f47f404e2f0 con 0x7f480c108950 2026-03-09T20:18:12.109 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.861+0000 7f47fb7fe640 1 -- 192.168.123.105:0/1373650697 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f47f4018a40 con 0x7f480c108950 2026-03-09T20:18:12.109 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.969+0000 7f4810dcf640 1 -- 192.168.123.105:0/1373650697 --> v1:192.168.123.105:6800/1901557444 -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}) -- 0x7f480c106fe0 con 0x7f47e403ecb0 2026-03-09T20:18:12.109 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.974+0000 7f47fb7fe640 1 -- 192.168.123.105:0/1373650697 <== mgr.14118 v1:192.168.123.105:6800/1901557444 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+24 (unknown 0 0 3265049985) 0x7f480c106fe0 con 0x7f47e403ecb0 2026-03-09T20:18:12.109 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.976+0000 7f4810dcf640 1 -- 192.168.123.105:0/1373650697 >> v1:192.168.123.105:6800/1901557444 conn(0x7f47e403ecb0 legacy=0x7f47e4041170 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:12.109 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.976+0000 7f4810dcf640 1 -- 192.168.123.105:0/1373650697 >> v1:192.168.123.105:6789/0 conn(0x7f480c108950 legacy=0x7f480c19e4b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:12.109 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.976+0000 7f4810dcf640 1 -- 192.168.123.105:0/1373650697 shutdown_connections 2026-03-09T20:18:12.109 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.976+0000 7f4810dcf640 1 -- 192.168.123.105:0/1373650697 >> 192.168.123.105:0/1373650697 conn(0x7f480c07bdf0 msgr2=0x7f480c105790 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:12.109 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.976+0000 7f4810dcf640 1 -- 192.168.123.105:0/1373650697 shutdown_connections 2026-03-09T20:18:12.109 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:11.976+0000 7f4810dcf640 1 -- 192.168.123.105:0/1373650697 wait complete. 2026-03-09T20:18:12.109 INFO:teuthology.orchestra.run.vm05.stdout:Deploying unmanaged mgr service... 2026-03-09T20:18:12.506 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout Scheduled mgr update... 2026-03-09T20:18:12.507 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.229+0000 7fe1e5700640 1 Processor -- start 2026-03-09T20:18:12.507 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.229+0000 7fe1e5700640 1 -- start start 2026-03-09T20:18:12.507 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.230+0000 7fe1e5700640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fe1e010cd80 con 0x7fe1e0108950 2026-03-09T20:18:12.507 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.230+0000 7fe1deffd640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7fe1e0108950 0x7fe1e0108d50 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:39200/0 (socket says 192.168.123.105:39200) 2026-03-09T20:18:12.507 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.230+0000 7fe1deffd640 1 -- 192.168.123.105:0/1834736022 learned_addr learned my addr 192.168.123.105:0/1834736022 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:12.507 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.230+0000 7fe1ddffb640 1 -- 192.168.123.105:0/1834736022 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 124821142 0 0) 0x7fe1e010cd80 con 0x7fe1e0108950 2026-03-09T20:18:12.507 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.230+0000 7fe1ddffb640 1 -- 192.168.123.105:0/1834736022 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fe1cc003620 con 0x7fe1e0108950 2026-03-09T20:18:12.507 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.230+0000 7fe1ddffb640 1 -- 192.168.123.105:0/1834736022 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 3698727304 0 0) 0x7fe1cc003620 con 0x7fe1e0108950 2026-03-09T20:18:12.507 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.230+0000 7fe1ddffb640 1 -- 
192.168.123.105:0/1834736022 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fe1e010df60 con 0x7fe1e0108950 2026-03-09T20:18:12.507 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.230+0000 7fe1ddffb640 1 -- 192.168.123.105:0/1834736022 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7fe1c8002e10 con 0x7fe1e0108950 2026-03-09T20:18:12.507 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.231+0000 7fe1ddffb640 1 -- 192.168.123.105:0/1834736022 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7fe1c80033e0 con 0x7fe1e0108950 2026-03-09T20:18:12.507 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.231+0000 7fe1ddffb640 1 -- 192.168.123.105:0/1834736022 <== mon.0 v1:192.168.123.105:6789/0 5 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7fe1c8005780 con 0x7fe1e0108950 2026-03-09T20:18:12.507 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.231+0000 7fe1e5700640 1 -- 192.168.123.105:0/1834736022 >> v1:192.168.123.105:6789/0 conn(0x7fe1e0108950 legacy=0x7fe1e0108d50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:12.507 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.231+0000 7fe1e5700640 1 -- 192.168.123.105:0/1834736022 shutdown_connections 2026-03-09T20:18:12.507 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.231+0000 7fe1e5700640 1 -- 192.168.123.105:0/1834736022 >> 192.168.123.105:0/1834736022 conn(0x7fe1e007bdf0 msgr2=0x7fe1e007c240 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:12.507 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.231+0000 7fe1e5700640 1 -- 192.168.123.105:0/1834736022 shutdown_connections 2026-03-09T20:18:12.507 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.231+0000 7fe1e5700640 1 -- 192.168.123.105:0/1834736022 wait complete. 
2026-03-09T20:18:12.507 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.232+0000 7fe1e5700640 1 Processor -- start 2026-03-09T20:18:12.507 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.232+0000 7fe1e5700640 1 -- start start 2026-03-09T20:18:12.507 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.232+0000 7fe1e5700640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fe1e019ebc0 con 0x7fe1e0108950 2026-03-09T20:18:12.507 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.232+0000 7fe1deffd640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7fe1e0108950 0x7fe1e019e4b0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:39206/0 (socket says 192.168.123.105:39206) 2026-03-09T20:18:12.507 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.232+0000 7fe1deffd640 1 -- 192.168.123.105:0/1148759715 learned_addr learned my addr 192.168.123.105:0/1148759715 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:12.507 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.233+0000 7fe1bffff640 1 -- 192.168.123.105:0/1148759715 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1731117507 0 0) 0x7fe1e019ebc0 con 0x7fe1e0108950 2026-03-09T20:18:12.507 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.233+0000 7fe1bffff640 1 -- 192.168.123.105:0/1148759715 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fe1b4003620 con 0x7fe1e0108950 2026-03-09T20:18:12.507 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.233+0000 7fe1bffff640 1 -- 192.168.123.105:0/1148759715 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 11890806 0 0) 0x7fe1b4003620 con 0x7fe1e0108950 2026-03-09T20:18:12.507 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.233+0000 7fe1bffff640 1 -- 192.168.123.105:0/1148759715 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fe1e019ebc0 con 0x7fe1e0108950 2026-03-09T20:18:12.507 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.233+0000 7fe1bffff640 1 -- 192.168.123.105:0/1148759715 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7fe1c8002890 con 0x7fe1e0108950 2026-03-09T20:18:12.507 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.233+0000 7fe1bffff640 1 -- 192.168.123.105:0/1148759715 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 3524124031 0 0) 0x7fe1e019ebc0 con 0x7fe1e0108950 2026-03-09T20:18:12.507 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.233+0000 7fe1bffff640 1 -- 192.168.123.105:0/1148759715 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fe1e019ed90 con 0x7fe1e0108950 2026-03-09T20:18:12.507 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.233+0000 7fe1e5700640 1 -- 192.168.123.105:0/1148759715 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7fe1e019f0a0 con 0x7fe1e0108950 2026-03-09T20:18:12.507 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.233+0000 7fe1e5700640 1 -- 192.168.123.105:0/1148759715 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7fe1e01a2c30 con 0x7fe1e0108950 2026-03-09T20:18:12.507 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.234+0000 7fe1bffff640 1 -- 192.168.123.105:0/1148759715 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7fe1c8004bd0 con 0x7fe1e0108950 2026-03-09T20:18:12.507 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.234+0000 7fe1bffff640 1 -- 192.168.123.105:0/1148759715 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7fe1c8005ec0 con 0x7fe1e0108950 2026-03-09T20:18:12.507 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.236+0000 7fe1bffff640 1 -- 192.168.123.105:0/1148759715 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 8) ==== 50212+0+0 (unknown 775757594 0 0) 0x7fe1c8007190 con 0x7fe1e0108950 2026-03-09T20:18:12.507 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.236+0000 7fe1bffff640 1 -- 192.168.123.105:0/1148759715 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (unknown 2608398624 0 0) 0x7fe1c804e1e0 con 0x7fe1e0108950 2026-03-09T20:18:12.507 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.237+0000 7fe1e5700640 1 -- 192.168.123.105:0/1148759715 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fe1a4005180 con 0x7fe1e0108950 2026-03-09T20:18:12.507 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.240+0000 7fe1bffff640 1 -- 192.168.123.105:0/1148759715 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7fe1c8018a10 con 0x7fe1e0108950 2026-03-09T20:18:12.507 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.346+0000 7fe1e5700640 1 -- 192.168.123.105:0/1148759715 --> v1:192.168.123.105:6800/1901557444 -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}) -- 0x7fe1a4002bf0 con 0x7fe1b403eb40 2026-03-09T20:18:12.507 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.352+0000 7fe1bffff640 1 -- 192.168.123.105:0/1148759715 <== mgr.14118 v1:192.168.123.105:6800/1901557444 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+24 (unknown 0 0 325935098) 0x7fe1a4002bf0 con 0x7fe1b403eb40 2026-03-09T20:18:12.507 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.355+0000 7fe1e5700640 1 -- 192.168.123.105:0/1148759715 >> v1:192.168.123.105:6800/1901557444 conn(0x7fe1b403eb40 legacy=0x7fe1b4041000 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:12.507 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.355+0000 7fe1e5700640 1 -- 192.168.123.105:0/1148759715 >> v1:192.168.123.105:6789/0 conn(0x7fe1e0108950 legacy=0x7fe1e019e4b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:12.507 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.356+0000 7fe1e5700640 1 -- 192.168.123.105:0/1148759715 shutdown_connections 2026-03-09T20:18:12.507 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.356+0000 7fe1e5700640 1 -- 192.168.123.105:0/1148759715 >> 192.168.123.105:0/1148759715 conn(0x7fe1e007bdf0 msgr2=0x7fe1e01057f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:12.507 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.356+0000 7fe1e5700640 1 -- 192.168.123.105:0/1148759715 shutdown_connections 2026-03-09T20:18:12.507 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.356+0000 7fe1e5700640 1 -- 192.168.123.105:0/1148759715 wait complete. 2026-03-09T20:18:12.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.633+0000 7f41fec03640 1 Processor -- start 2026-03-09T20:18:12.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.634+0000 7f41fec03640 1 -- start start 2026-03-09T20:18:12.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.634+0000 7f41fec03640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f41f810cd80 con 0x7f41f8108950 2026-03-09T20:18:12.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.634+0000 7f41fc978640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f41f8108950 0x7f41f8108d50 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:39210/0 (socket says 192.168.123.105:39210) 2026-03-09T20:18:12.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.634+0000 7f41fc978640 1 -- 192.168.123.105:0/2160589713 learned_addr learned my addr 192.168.123.105:0/2160589713 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:12.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.634+0000 7f41ef7fe640 1 -- 192.168.123.105:0/2160589713 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3687987312 0 0) 0x7f41f810cd80 con 0x7f41f8108950 2026-03-09T20:18:12.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.634+0000 7f41ef7fe640 1 -- 192.168.123.105:0/2160589713 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f41d8003620 con 0x7f41f8108950 2026-03-09T20:18:12.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.635+0000 7f41ef7fe640 1 -- 192.168.123.105:0/2160589713 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 1207983107 0 0) 0x7f41d8003620 con 0x7f41f8108950 2026-03-09T20:18:12.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.635+0000 7f41ef7fe640 1 -- 192.168.123.105:0/2160589713 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f41f810df60 con 0x7f41f8108950 2026-03-09T20:18:12.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.635+0000 7f41ef7fe640 1 -- 192.168.123.105:0/2160589713 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f41e0002e10 con 0x7f41f8108950 2026-03-09T20:18:12.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.635+0000 7f41ef7fe640 1 -- 192.168.123.105:0/2160589713 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f41e00034e0 con 
0x7f41f8108950 2026-03-09T20:18:12.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.635+0000 7f41fec03640 1 -- 192.168.123.105:0/2160589713 >> v1:192.168.123.105:6789/0 conn(0x7f41f8108950 legacy=0x7f41f8108d50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:12.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.635+0000 7f41fec03640 1 -- 192.168.123.105:0/2160589713 shutdown_connections 2026-03-09T20:18:12.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.635+0000 7f41fec03640 1 -- 192.168.123.105:0/2160589713 >> 192.168.123.105:0/2160589713 conn(0x7f41f807bdf0 msgr2=0x7f41f807c240 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:12.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.635+0000 7f41fec03640 1 -- 192.168.123.105:0/2160589713 shutdown_connections 2026-03-09T20:18:12.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.635+0000 7f41fec03640 1 -- 192.168.123.105:0/2160589713 wait complete. 2026-03-09T20:18:12.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.636+0000 7f41fec03640 1 Processor -- start 2026-03-09T20:18:12.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.636+0000 7f41fec03640 1 -- start start 2026-03-09T20:18:12.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.636+0000 7f41fec03640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f41f819ec80 con 0x7f41f8108950 2026-03-09T20:18:12.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.636+0000 7f41fc978640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f41f8108950 0x7f41f819e570 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:39218/0 (socket says 192.168.123.105:39218) 2026-03-09T20:18:12.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.636+0000 7f41fc978640 1 -- 192.168.123.105:0/1096563322 learned_addr learned my addr 192.168.123.105:0/1096563322 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:12.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.636+0000 7f41edffb640 1 -- 192.168.123.105:0/1096563322 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 995561822 0 0) 0x7f41f819ec80 con 0x7f41f8108950 2026-03-09T20:18:12.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.636+0000 7f41edffb640 1 -- 192.168.123.105:0/1096563322 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f41c0003620 con 0x7f41f8108950 2026-03-09T20:18:12.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.637+0000 7f41edffb640 1 -- 192.168.123.105:0/1096563322 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 4006549937 0 0) 0x7f41c0003620 con 0x7f41f8108950 2026-03-09T20:18:12.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.637+0000 7f41edffb640 1 -- 192.168.123.105:0/1096563322 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f41f819ec80 con 0x7f41f8108950 2026-03-09T20:18:12.879 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.637+0000 7f41edffb640 1 -- 192.168.123.105:0/1096563322 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f41e0004f90 con 0x7f41f8108950 2026-03-09T20:18:12.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.637+0000 7f41edffb640 1 -- 192.168.123.105:0/1096563322 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 1492845264 0 0) 0x7f41f819ec80 con 0x7f41f8108950 2026-03-09T20:18:12.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.637+0000 7f41edffb640 1 -- 192.168.123.105:0/1096563322 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f41f819ee50 con 0x7f41f8108950 2026-03-09T20:18:12.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.637+0000 7f41fec03640 1 -- 192.168.123.105:0/1096563322 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f41f819f160 con 0x7f41f8108950 2026-03-09T20:18:12.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.637+0000 7f41fec03640 1 -- 192.168.123.105:0/1096563322 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f41f81a2c70 con 0x7f41f8108950 2026-03-09T20:18:12.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.637+0000 7f41edffb640 1 -- 192.168.123.105:0/1096563322 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f41e0002eb0 con 0x7f41f8108950 2026-03-09T20:18:12.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.637+0000 7f41edffb640 1 -- 192.168.123.105:0/1096563322 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f41e0005e70 con 0x7f41f8108950 2026-03-09T20:18:12.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.638+0000 7f41edffb640 1 -- 192.168.123.105:0/1096563322 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 8) ==== 50212+0+0 (unknown 775757594 0 0) 0x7f41e00124e0 con 0x7f41f8108950 2026-03-09T20:18:12.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.638+0000 7f41edffb640 1 -- 192.168.123.105:0/1096563322 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (unknown 2608398624 0 0) 0x7f41e004e2d0 con 0x7f41f8108950 2026-03-09T20:18:12.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.639+0000 7f41fec03640 1 -- 192.168.123.105:0/1096563322 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f41b8005180 con 0x7f41f8108950 2026-03-09T20:18:12.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.642+0000 7f41edffb640 1 -- 192.168.123.105:0/1096563322 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f41e00189a0 con 0x7f41f8108950 2026-03-09T20:18:12.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.738+0000 7f41fec03640 1 -- 192.168.123.105:0/1096563322 --> v1:192.168.123.105:6789/0 -- mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) -- 0x7f41b8005470 con 0x7f41f8108950 2026-03-09T20:18:12.879 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.741+0000 7f41edffb640 1 -- 192.168.123.105:0/1096563322 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{prefix=config set, name=mgr/cephadm/container_init}]=0 v6) ==== 142+0+0 (unknown 1123546310 0 0) 0x7f41e00182a0 con 0x7f41f8108950 2026-03-09T20:18:12.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.748+0000 7f41fec03640 1 -- 192.168.123.105:0/1096563322 >> v1:192.168.123.105:6800/1901557444 conn(0x7f41c003eb30 legacy=0x7f41c0040ff0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:12.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.748+0000 7f41fec03640 1 -- 192.168.123.105:0/1096563322 >> v1:192.168.123.105:6789/0 conn(0x7f41f8108950 legacy=0x7f41f819e570 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:12.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.749+0000 7f41fec03640 1 -- 192.168.123.105:0/1096563322 shutdown_connections 2026-03-09T20:18:12.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.749+0000 7f41fec03640 1 -- 192.168.123.105:0/1096563322 >> 192.168.123.105:0/1096563322 conn(0x7f41f807bdf0 msgr2=0x7f41f8106050 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:12.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.749+0000 7f41fec03640 1 -- 192.168.123.105:0/1096563322 shutdown_connections 2026-03-09T20:18:12.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:12.749+0000 7f41fec03640 1 -- 192.168.123.105:0/1096563322 wait complete. 2026-03-09T20:18:13.131 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:13 vm05 ceph-mon[51870]: Added host vm05 2026-03-09T20:18:13.131 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:13 vm05 ceph-mon[51870]: from='client.14140 v1:192.168.123.105:0/1373650697' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:13.131 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:13 vm05 ceph-mon[51870]: Saving service mon spec with placement count:5 2026-03-09T20:18:13.131 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:13 vm05 ceph-mon[51870]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' 2026-03-09T20:18:13.131 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:13 vm05 ceph-mon[51870]: from='client.?
v1:192.168.123.105:0/1096563322' entity='client.admin' 2026-03-09T20:18:13.313 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.020+0000 7f2ab9bb7640 1 Processor -- start 2026-03-09T20:18:13.313 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.021+0000 7f2ab9bb7640 1 -- start start 2026-03-09T20:18:13.313 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.021+0000 7f2ab9bb7640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f2ab410cd80 con 0x7f2ab4108950 2026-03-09T20:18:13.313 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.021+0000 7f2ab37fe640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f2ab4108950 0x7f2ab4108d50 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:39224/0 (socket says 192.168.123.105:39224) 2026-03-09T20:18:13.313 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.021+0000 7f2ab37fe640 1 -- 192.168.123.105:0/502791923 learned_addr learned my addr 192.168.123.105:0/502791923 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:13.313 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.022+0000 7f2ab27fc640 1 -- 192.168.123.105:0/502791923 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3601513407 0 0) 0x7f2ab410cd80 con 0x7f2ab4108950 2026-03-09T20:18:13.313 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.022+0000 7f2ab27fc640 1 -- 192.168.123.105:0/502791923 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f2a98003620 con 0x7f2ab4108950 2026-03-09T20:18:13.313 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.023+0000 7f2ab27fc640 1 -- 192.168.123.105:0/502791923 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 774346502 0 0) 0x7f2a98003620 con 0x7f2ab4108950 2026-03-09T20:18:13.313 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.023+0000 7f2ab27fc640 1 -- 192.168.123.105:0/502791923 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f2ab410df60 con 0x7f2ab4108950 2026-03-09T20:18:13.313 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.023+0000 7f2ab27fc640 1 -- 192.168.123.105:0/502791923 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f2aa0002e10 con 0x7f2ab4108950 2026-03-09T20:18:13.313 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.028+0000 7f2ab27fc640 1 -- 192.168.123.105:0/502791923 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f2aa00033e0 con 0x7f2ab4108950 2026-03-09T20:18:13.313 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.028+0000 7f2ab27fc640 1 -- 192.168.123.105:0/502791923 <== mon.0 v1:192.168.123.105:6789/0 5 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f2aa0005780 con 0x7f2ab4108950 2026-03-09T20:18:13.313 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.029+0000 7f2ab9bb7640 1 -- 192.168.123.105:0/502791923 >> v1:192.168.123.105:6789/0 conn(0x7f2ab4108950 legacy=0x7f2ab4108d50 unknown :-1 
s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:13.313 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.030+0000 7f2ab9bb7640 1 -- 192.168.123.105:0/502791923 shutdown_connections 2026-03-09T20:18:13.313 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.030+0000 7f2ab9bb7640 1 -- 192.168.123.105:0/502791923 >> 192.168.123.105:0/502791923 conn(0x7f2ab407bdf0 msgr2=0x7f2ab407c240 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:13.313 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.030+0000 7f2ab9bb7640 1 -- 192.168.123.105:0/502791923 shutdown_connections 2026-03-09T20:18:13.313 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.030+0000 7f2ab9bb7640 1 -- 192.168.123.105:0/502791923 wait complete. 2026-03-09T20:18:13.313 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.031+0000 7f2ab9bb7640 1 Processor -- start 2026-03-09T20:18:13.313 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.031+0000 7f2ab9bb7640 1 -- start start 2026-03-09T20:18:13.313 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.031+0000 7f2ab9bb7640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f2ab419d470 con 0x7f2ab4108950 2026-03-09T20:18:13.313 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.034+0000 7f2ab37fe640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f2ab4108950 0x7f2ab4199fb0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:39230/0 (socket says 192.168.123.105:39230) 2026-03-09T20:18:13.313 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.034+0000 7f2ab37fe640 1 -- 192.168.123.105:0/2619650912 learned_addr learned my addr 192.168.123.105:0/2619650912 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:13.313 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.034+0000 7f2ab0ff9640 1 -- 192.168.123.105:0/2619650912 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3166246121 0 0) 0x7f2ab419d470 con 0x7f2ab4108950 2026-03-09T20:18:13.313 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.034+0000 7f2ab0ff9640 1 -- 192.168.123.105:0/2619650912 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f2a7c003620 con 0x7f2ab4108950 2026-03-09T20:18:13.313 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.035+0000 7f2ab0ff9640 1 -- 192.168.123.105:0/2619650912 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3713490455 0 0) 0x7f2a7c003620 con 0x7f2ab4108950 2026-03-09T20:18:13.313 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.035+0000 7f2ab0ff9640 1 -- 192.168.123.105:0/2619650912 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f2ab419d470 con 0x7f2ab4108950 2026-03-09T20:18:13.313 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.035+0000 7f2ab0ff9640 1 -- 192.168.123.105:0/2619650912 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f2aa0002890 con 0x7f2ab4108950 2026-03-09T20:18:13.314 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.035+0000 7f2ab0ff9640 1 -- 192.168.123.105:0/2619650912 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 331402019 0 0) 0x7f2ab419d470 con 0x7f2ab4108950 2026-03-09T20:18:13.314 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.036+0000 7f2ab0ff9640 1 -- 192.168.123.105:0/2619650912 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f2ab419d640 con 0x7f2ab4108950 2026-03-09T20:18:13.314 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.036+0000 7f2ab9bb7640 1 -- 192.168.123.105:0/2619650912 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f2ab419a6c0 con 0x7f2ab4108950 2026-03-09T20:18:13.314 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.036+0000 7f2ab9bb7640 1 -- 192.168.123.105:0/2619650912 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f2ab41aff30 con 0x7f2ab4108950 2026-03-09T20:18:13.314 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.036+0000 7f2ab0ff9640 1 -- 192.168.123.105:0/2619650912 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f2aa0004bd0 con 0x7f2ab4108950 2026-03-09T20:18:13.314 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.038+0000 7f2ab0ff9640 1 -- 192.168.123.105:0/2619650912 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f2aa00061a0 con 0x7f2ab4108950 2026-03-09T20:18:13.314 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.038+0000 7f2ab9bb7640 1 -- 192.168.123.105:0/2619650912 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f2ab410d9d0 con 0x7f2ab4108950 2026-03-09T20:18:13.314 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.038+0000 7f2ab0ff9640 1 -- 192.168.123.105:0/2619650912 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 8) ==== 50212+0+0 (unknown 775757594 0 0) 0x7f2aa0007470 con 0x7f2ab4108950 2026-03-09T20:18:13.314 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.041+0000 7f2ab0ff9640 1 -- 192.168.123.105:0/2619650912 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (unknown 2608398624 0 0) 0x7f2aa004e1b0 con 0x7f2ab4108950 2026-03-09T20:18:13.314 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.041+0000 7f2ab0ff9640 1 -- 192.168.123.105:0/2619650912 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f2aa0002bc0 con 0x7f2ab4108950 2026-03-09T20:18:13.314 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.155+0000 7f2ab9bb7640 1 -- 192.168.123.105:0/2619650912 --> v1:192.168.123.105:6789/0 -- mon_command([{prefix=config set, name=mgr/dashboard/ssl_server_port}] v 0) -- 0x7f2ab410dc60 con 0x7f2ab4108950 2026-03-09T20:18:13.314 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.158+0000 7f2ab0ff9640 1 -- 192.168.123.105:0/2619650912 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{prefix=config set, name=mgr/dashboard/ssl_server_port}]=0 v7) ==== 130+0+0 (unknown 1336629364 0 0)
0x7f2aa0018900 con 0x7f2ab4108950 2026-03-09T20:18:13.314 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.165+0000 7f2a967fc640 1 -- 192.168.123.105:0/2619650912 >> v1:192.168.123.105:6800/1901557444 conn(0x7f2a7c03eb70 legacy=0x7f2a7c041030 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:13.314 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.165+0000 7f2a967fc640 1 -- 192.168.123.105:0/2619650912 >> v1:192.168.123.105:6789/0 conn(0x7f2ab4108950 legacy=0x7f2ab4199fb0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:13.314 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.165+0000 7f2a967fc640 1 -- 192.168.123.105:0/2619650912 shutdown_connections 2026-03-09T20:18:13.314 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.166+0000 7f2a967fc640 1 -- 192.168.123.105:0/2619650912 >> 192.168.123.105:0/2619650912 conn(0x7f2ab407bdf0 msgr2=0x7f2ab4105810 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:13.314 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.166+0000 7f2a967fc640 1 -- 192.168.123.105:0/2619650912 shutdown_connections 2026-03-09T20:18:13.314 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.166+0000 7f2a967fc640 1 -- 192.168.123.105:0/2619650912 wait complete. 2026-03-09T20:18:13.314 INFO:teuthology.orchestra.run.vm05.stdout:Enabling the dashboard module... 2026-03-09T20:18:14.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:14 vm05 ceph-mon[51870]: from='client.14142 v1:192.168.123.105:0/1148759715' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:14.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:14 vm05 ceph-mon[51870]: Saving service mgr spec with placement count:2 2026-03-09T20:18:14.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:14 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2619650912' entity='client.admin' 2026-03-09T20:18:14.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:14 vm05 ceph-mon[51870]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' 2026-03-09T20:18:14.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:14 vm05 ceph-mon[51870]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' 2026-03-09T20:18:14.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:14 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3946134565' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-09T20:18:14.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:14 vm05 ceph-mon[51870]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' 2026-03-09T20:18:14.631 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.456+0000 7fd01d386640 1 Processor -- start 2026-03-09T20:18:14.632 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.456+0000 7fd01d386640 1 -- start start 2026-03-09T20:18:14.632 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.456+0000 7fd01d386640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fd018074810 con 0x7fd018073c70 2026-03-09T20:18:14.632 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.457+0000 7fd016ffd640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7fd018073c70 0x7fd018074070 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:39238/0 (socket says 192.168.123.105:39238) 2026-03-09T20:18:14.632 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.457+0000 7fd016ffd640 1 -- 192.168.123.105:0/2614837477 learned_addr learned my addr 192.168.123.105:0/2614837477 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:14.632 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.457+0000 7fd015ffb640 1 -- 192.168.123.105:0/2614837477 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 649163858 0 0) 0x7fd018074810 con 0x7fd018073c70 2026-03-09T20:18:14.632 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.457+0000 7fd015ffb640 1 -- 192.168.123.105:0/2614837477 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fd004003620 con 0x7fd018073c70 2026-03-09T20:18:14.632 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.457+0000 7fd015ffb640 1 -- 192.168.123.105:0/2614837477 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 4171247231 0 0) 0x7fd004003620 con 0x7fd018073c70 2026-03-09T20:18:14.632 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.457+0000 7fd015ffb640 1 -- 192.168.123.105:0/2614837477 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fd0181121f0 con 0x7fd018073c70 2026-03-09T20:18:14.632 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.457+0000 7fd015ffb640 1 -- 192.168.123.105:0/2614837477 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7fd008002e10 con 0x7fd018073c70 2026-03-09T20:18:14.632 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.458+0000 7fd015ffb640 1 -- 192.168.123.105:0/2614837477 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7fd008003400 con 0x7fd018073c70 2026-03-09T20:18:14.632 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.458+0000 7fd015ffb640 1 -- 192.168.123.105:0/2614837477 <== mon.0 v1:192.168.123.105:6789/0 5 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7fd0080059d0 con 0x7fd018073c70 
2026-03-09T20:18:14.632 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.458+0000 7fd01d386640 1 -- 192.168.123.105:0/2614837477 >> v1:192.168.123.105:6789/0 conn(0x7fd018073c70 legacy=0x7fd018074070 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:14.632 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.458+0000 7fd01d386640 1 -- 192.168.123.105:0/2614837477 shutdown_connections 2026-03-09T20:18:14.632 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.458+0000 7fd01d386640 1 -- 192.168.123.105:0/2614837477 >> 192.168.123.105:0/2614837477 conn(0x7fd01806f550 msgr2=0x7fd018071970 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:14.632 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.458+0000 7fd01d386640 1 -- 192.168.123.105:0/2614837477 shutdown_connections 2026-03-09T20:18:14.632 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.458+0000 7fd01d386640 1 -- 192.168.123.105:0/2614837477 wait complete. 2026-03-09T20:18:14.632 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.458+0000 7fd01d386640 1 Processor -- start 2026-03-09T20:18:14.632 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.458+0000 7fd01d386640 1 -- start start 2026-03-09T20:18:14.632 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.459+0000 7fd01d386640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fd01811e070 con 0x7fd018073c70 2026-03-09T20:18:14.632 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.459+0000 7fd016ffd640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7fd018073c70 0x7fd01811bbb0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:39252/0 (socket says 192.168.123.105:39252) 2026-03-09T20:18:14.632 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.459+0000 7fd016ffd640 1 -- 192.168.123.105:0/3946134565 learned_addr learned my addr 192.168.123.105:0/3946134565 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:14.632 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.459+0000 7fcff7fff640 1 -- 192.168.123.105:0/3946134565 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1924920232 0 0) 0x7fd01811e070 con 0x7fd018073c70 2026-03-09T20:18:14.632 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.459+0000 7fcff7fff640 1 -- 192.168.123.105:0/3946134565 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fcfe8003620 con 0x7fd018073c70 2026-03-09T20:18:14.632 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.459+0000 7fcff7fff640 1 -- 192.168.123.105:0/3946134565 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 4129583039 0 0) 0x7fcfe8003620 con 0x7fd018073c70 2026-03-09T20:18:14.632 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.459+0000 7fcff7fff640 1 -- 192.168.123.105:0/3946134565 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fd01811e070 con 0x7fd018073c70 2026-03-09T20:18:14.632 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: 
stderr 2026-03-09T20:18:13.459+0000 7fcff7fff640 1 -- 192.168.123.105:0/3946134565 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7fd008002890 con 0x7fd018073c70 2026-03-09T20:18:14.632 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.460+0000 7fcff7fff640 1 -- 192.168.123.105:0/3946134565 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2659428285 0 0) 0x7fd01811e070 con 0x7fd018073c70 2026-03-09T20:18:14.632 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.460+0000 7fcff7fff640 1 -- 192.168.123.105:0/3946134565 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fd01811e240 con 0x7fd018073c70 2026-03-09T20:18:14.632 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.460+0000 7fd01d386640 1 -- 192.168.123.105:0/3946134565 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7fd01811c2c0 con 0x7fd018073c70 2026-03-09T20:18:14.632 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.460+0000 7fd01d386640 1 -- 192.168.123.105:0/3946134565 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7fd0181b8530 con 0x7fd018073c70 2026-03-09T20:18:14.632 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.461+0000 7fcff7fff640 1 -- 192.168.123.105:0/3946134565 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7fd0080055f0 con 0x7fd018073c70 2026-03-09T20:18:14.632 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.461+0000 7fd01d386640 1 -- 192.168.123.105:0/3946134565 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fd018111e50 con 0x7fd018073c70 2026-03-09T20:18:14.632 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.461+0000 7fcff7fff640 1 -- 192.168.123.105:0/3946134565 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7fd008005f50 con 0x7fd018073c70 2026-03-09T20:18:14.632 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.461+0000 7fcff7fff640 1 -- 192.168.123.105:0/3946134565 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 8) ==== 50212+0+0 (unknown 775757594 0 0) 0x7fd008012620 con 0x7fd018073c70 2026-03-09T20:18:14.632 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.461+0000 7fcff7fff640 1 -- 192.168.123.105:0/3946134565 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (unknown 2608398624 0 0) 0x7fd00804e390 con 0x7fd018073c70 2026-03-09T20:18:14.632 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.464+0000 7fcff7fff640 1 -- 192.168.123.105:0/3946134565 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7fd008018ae0 con 0x7fd018073c70 2026-03-09T20:18:14.632 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:13.585+0000 7fd01d386640 1 -- 192.168.123.105:0/3946134565 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0) -- 0x7fd0181b8bc0 con 0x7fd018073c70 2026-03-09T20:18:14.632 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 
2026-03-09T20:18:14.490+0000 7fcff7fff640 1 -- 192.168.123.105:0/3946134565 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "mgr module enable", "module": "dashboard"}]=0 v9) ==== 88+0+0 (unknown 1498667528 0 0) 0x7fd0080183e0 con 0x7fd018073c70 2026-03-09T20:18:14.632 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.490+0000 7fcff7fff640 1 -- 192.168.123.105:0/3946134565 <== mon.0 v1:192.168.123.105:6789/0 11 ==== mgrmap(e 9) ==== 50225+0+0 (unknown 3260243651 0 0) 0x7fd00804cd50 con 0x7fd018073c70 2026-03-09T20:18:14.632 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.498+0000 7fcff5ffb640 1 -- 192.168.123.105:0/3946134565 >> v1:192.168.123.105:6800/1901557444 conn(0x7fcfe803ec40 legacy=0x7fcfe8041100 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:14.632 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.498+0000 7fcff5ffb640 1 -- 192.168.123.105:0/3946134565 >> v1:192.168.123.105:6789/0 conn(0x7fd018073c70 legacy=0x7fd01811bbb0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:14.632 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.498+0000 7fcff5ffb640 1 -- 192.168.123.105:0/3946134565 shutdown_connections 2026-03-09T20:18:14.632 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.498+0000 7fcff5ffb640 1 -- 192.168.123.105:0/3946134565 >> 192.168.123.105:0/3946134565 conn(0x7fd01806f550 msgr2=0x7fd01810ea30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:14.633 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.498+0000 7fcff5ffb640 1 -- 192.168.123.105:0/3946134565 shutdown_connections 2026-03-09T20:18:14.633 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.498+0000 7fcff5ffb640 1 -- 192.168.123.105:0/3946134565 wait complete. 
2026-03-09T20:18:14.663 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:14 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ignoring --setuser ceph since I am not root 2026-03-09T20:18:14.663 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:14 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ignoring --setgroup ceph since I am not root 2026-03-09T20:18:14.663 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:14 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:14.646+0000 7fcaeb671140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T20:18:14.940 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:14 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:14.695+0000 7fcaeb671140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T20:18:15.074 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout { 2026-03-09T20:18:15.074 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 9, 2026-03-09T20:18:15.074 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-09T20:18:15.074 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "active_name": "y", 2026-03-09T20:18:15.074 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-09T20:18:15.074 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout } 2026-03-09T20:18:15.074 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.812+0000 7fb5490e4640 1 Processor -- start 2026-03-09T20:18:15.074 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.813+0000 7fb5490e4640 1 -- start start 2026-03-09T20:18:15.074 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.813+0000 7fb5490e4640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fb544111530 con 0x7fb544074160 2026-03-09T20:18:15.074 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.813+0000 7fb543fff640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7fb544074160 0x7fb544074560 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:39282/0 (socket says 192.168.123.105:39282) 2026-03-09T20:18:15.074 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.813+0000 7fb543fff640 1 -- 192.168.123.105:0/1664209839 learned_addr learned my addr 192.168.123.105:0/1664209839 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:15.074 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.814+0000 7fb5437fe640 1 -- 192.168.123.105:0/1664209839 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 784192943 0 0) 0x7fb544111530 con 0x7fb544074160 2026-03-09T20:18:15.074 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.814+0000 7fb5437fe640 1 -- 192.168.123.105:0/1664209839 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fb51c003620 con 0x7fb544074160 2026-03-09T20:18:15.074 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.814+0000 7fb5437fe640 1 -- 192.168.123.105:0/1664209839 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 3346346947 0 0) 0x7fb51c003620 con 0x7fb544074160 
2026-03-09T20:18:15.074 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.814+0000 7fb5437fe640 1 -- 192.168.123.105:0/1664209839 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fb544112710 con 0x7fb544074160 2026-03-09T20:18:15.074 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.814+0000 7fb5437fe640 1 -- 192.168.123.105:0/1664209839 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7fb534002e10 con 0x7fb544074160 2026-03-09T20:18:15.074 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.814+0000 7fb5437fe640 1 -- 192.168.123.105:0/1664209839 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7fb534003400 con 0x7fb544074160 2026-03-09T20:18:15.074 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.815+0000 7fb5490e4640 1 -- 192.168.123.105:0/1664209839 >> v1:192.168.123.105:6789/0 conn(0x7fb544074160 legacy=0x7fb544074560 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:15.074 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.816+0000 7fb5490e4640 1 -- 192.168.123.105:0/1664209839 shutdown_connections 2026-03-09T20:18:15.074 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.816+0000 7fb5490e4640 1 -- 192.168.123.105:0/1664209839 >> 192.168.123.105:0/1664209839 conn(0x7fb54406f4e0 msgr2=0x7fb544071920 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:15.074 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.816+0000 7fb5490e4640 1 -- 192.168.123.105:0/1664209839 shutdown_connections 2026-03-09T20:18:15.074 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.816+0000 7fb5490e4640 1 -- 192.168.123.105:0/1664209839 wait complete. 
2026-03-09T20:18:15.074 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.816+0000 7fb5490e4640 1 Processor -- start 2026-03-09T20:18:15.074 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.816+0000 7fb5490e4640 1 -- start start 2026-03-09T20:18:15.074 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.816+0000 7fb5490e4640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fb5441a3050 con 0x7fb544074160 2026-03-09T20:18:15.074 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.816+0000 7fb543fff640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7fb544074160 0x7fb5441a2940 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:39296/0 (socket says 192.168.123.105:39296) 2026-03-09T20:18:15.074 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.816+0000 7fb543fff640 1 -- 192.168.123.105:0/41370987 learned_addr learned my addr 192.168.123.105:0/41370987 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:15.074 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.817+0000 7fb541ffb640 1 -- 192.168.123.105:0/41370987 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1449729868 0 0) 0x7fb5441a3050 con 0x7fb544074160 2026-03-09T20:18:15.074 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.817+0000 7fb541ffb640 1 -- 192.168.123.105:0/41370987 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fb520003620 con 0x7fb544074160 2026-03-09T20:18:15.074 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.817+0000 7fb541ffb640 1 -- 192.168.123.105:0/41370987 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 4153403179 0 0) 0x7fb520003620 con 0x7fb544074160 2026-03-09T20:18:15.074 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.817+0000 7fb541ffb640 1 -- 192.168.123.105:0/41370987 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fb5441a3050 con 0x7fb544074160 2026-03-09T20:18:15.075 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.817+0000 7fb541ffb640 1 -- 192.168.123.105:0/41370987 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7fb534004e90 con 0x7fb544074160 2026-03-09T20:18:15.075 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.817+0000 7fb541ffb640 1 -- 192.168.123.105:0/41370987 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 1552443297 0 0) 0x7fb5441a3050 con 0x7fb544074160 2026-03-09T20:18:15.075 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.817+0000 7fb541ffb640 1 -- 192.168.123.105:0/41370987 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fb5441a3220 con 0x7fb544074160 2026-03-09T20:18:15.075 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.817+0000 7fb5490e4640 1 -- 192.168.123.105:0/41370987 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7fb5441a34d0 con 0x7fb544074160 2026-03-09T20:18:15.075 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: 
stderr 2026-03-09T20:18:14.817+0000 7fb5490e4640 1 -- 192.168.123.105:0/41370987 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7fb5441a70c0 con 0x7fb544074160 2026-03-09T20:18:15.075 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.818+0000 7fb541ffb640 1 -- 192.168.123.105:0/41370987 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7fb5340032c0 con 0x7fb544074160 2026-03-09T20:18:15.075 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.818+0000 7fb541ffb640 1 -- 192.168.123.105:0/41370987 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7fb534005a30 con 0x7fb544074160 2026-03-09T20:18:15.075 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.819+0000 7fb541ffb640 1 -- 192.168.123.105:0/41370987 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 9) ==== 50225+0+0 (unknown 3260243651 0 0) 0x7fb5340126c0 con 0x7fb544074160 2026-03-09T20:18:15.075 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.819+0000 7fb53bfff640 1 -- 192.168.123.105:0/41370987 >> v1:192.168.123.105:6800/1901557444 conn(0x7fb52003ec80 legacy=0x7fb520041140 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v1:192.168.123.105:6800/1901557444 2026-03-09T20:18:15.075 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.819+0000 7fb541ffb640 1 -- 192.168.123.105:0/41370987 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (unknown 2608398624 0 0) 0x7fb53404e0e0 con 0x7fb544074160 2026-03-09T20:18:15.075 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.820+0000 7fb5490e4640 1 -- 192.168.123.105:0/41370987 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fb504005180 con 0x7fb544074160 2026-03-09T20:18:15.075 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.826+0000 7fb541ffb640 1 -- 192.168.123.105:0/41370987 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7fb5340187b0 con 0x7fb544074160 2026-03-09T20:18:15.075 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.940+0000 7fb5490e4640 1 -- 192.168.123.105:0/41370987 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "mgr stat"} v 0) -- 0x7fb504005c80 con 0x7fb544074160 2026-03-09T20:18:15.075 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.943+0000 7fb541ffb640 1 -- 192.168.123.105:0/41370987 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "mgr stat"}]=0 v9) ==== 56+0+88 (unknown 2005831594 0 1748205903) 0x7fb5340180b0 con 0x7fb544074160 2026-03-09T20:18:15.075 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.946+0000 7fb53affd640 1 -- 192.168.123.105:0/41370987 >> v1:192.168.123.105:6800/1901557444 conn(0x7fb52003ec80 legacy=0x7fb520041140 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T20:18:15.075 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.946+0000 7fb53affd640 1 -- 192.168.123.105:0/41370987 >> v1:192.168.123.105:6789/0 conn(0x7fb544074160 legacy=0x7fb5441a2940 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:15.075 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.946+0000 7fb53affd640 1 -- 192.168.123.105:0/41370987 shutdown_connections
2026-03-09T20:18:15.075 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.946+0000 7fb53affd640 1 -- 192.168.123.105:0/41370987 >> 192.168.123.105:0/41370987 conn(0x7fb54406f4e0 msgr2=0x7fb544071920 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T20:18:15.075 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.946+0000 7fb53affd640 1 -- 192.168.123.105:0/41370987 shutdown_connections
2026-03-09T20:18:15.075 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:14.946+0000 7fb53affd640 1 -- 192.168.123.105:0/41370987 wait complete.
2026-03-09T20:18:15.075 INFO:teuthology.orchestra.run.vm05.stdout:Waiting for the mgr to restart...
2026-03-09T20:18:15.075 INFO:teuthology.orchestra.run.vm05.stdout:Waiting for mgr epoch 9...
2026-03-09T20:18:15.191 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:15 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:15.124+0000 7fcaeb671140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-09T20:18:15.441 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:15 vm05 ceph-mon[51870]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y'
2026-03-09T20:18:15.441 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:15 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3946134565' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
2026-03-09T20:18:15.441 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:15 vm05 ceph-mon[51870]: mgrmap e9: y(active, since 8s)
2026-03-09T20:18:15.441 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:15 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/41370987' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
2026-03-09T20:18:15.442 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:15 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:15.432+0000 7fcaeb671140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-09T20:18:15.842 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:15 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-09T20:18:15.842 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:15 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-09T20:18:15.842 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:15 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: from numpy import show_config as show_numpy_config 2026-03-09T20:18:15.842 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:15 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:15.513+0000 7fcaeb671140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T20:18:15.842 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:15 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:15.547+0000 7fcaeb671140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T20:18:15.842 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:15 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:15.615+0000 7fcaeb671140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T20:18:16.159 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:16 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:16.066+0000 7fcaeb671140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T20:18:16.469 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:16 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:16.168+0000 7fcaeb671140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T20:18:16.469 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:16 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:16.204+0000 7fcaeb671140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T20:18:16.469 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:16 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:16.236+0000 7fcaeb671140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T20:18:16.469 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:16 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:16.275+0000 7fcaeb671140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T20:18:16.469 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:16 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:16.310+0000 7fcaeb671140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T20:18:16.469 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:16 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:16.469+0000 7fcaeb671140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T20:18:16.724 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:16 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:16.517+0000 7fcaeb671140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T20:18:16.975 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:16 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:16.724+0000 7fcaeb671140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T20:18:17.321 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:16 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:16.975+0000 7fcaeb671140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T20:18:17.321 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:17 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:17.010+0000 7fcaeb671140 -1 mgr[py] Module selftest has 
missing NOTIFY_TYPES member
2026-03-09T20:18:17.321 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:17 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:17.049+0000 7fcaeb671140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-09T20:18:17.321 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:17 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:17.118+0000 7fcaeb671140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-09T20:18:17.321 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:17 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:17.151+0000 7fcaeb671140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-09T20:18:17.321 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:17 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:17.221+0000 7fcaeb671140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-09T20:18:17.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:17 vm05 ceph-mon[51870]: Active manager daemon y restarted
2026-03-09T20:18:17.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:17 vm05 ceph-mon[51870]: Activating manager daemon y
2026-03-09T20:18:17.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:17 vm05 ceph-mon[51870]: osdmap e3: 0 total, 0 up, 0 in
2026-03-09T20:18:17.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:17 vm05 ceph-mon[51870]: mgrmap e10: y(active, starting, since 0.00611163s)
2026-03-09T20:18:17.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:17 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T20:18:17.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:17 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-09T20:18:17.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:17 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-09T20:18:17.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:17 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-09T20:18:17.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:17 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-09T20:18:17.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:17 vm05 ceph-mon[51870]: Manager daemon y is now available
2026-03-09T20:18:17.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:17 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T20:18:17.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:17 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-09T20:18:17.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:17 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-09T20:18:17.661
INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:17 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:17.321+0000 7fcaeb671140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T20:18:17.661 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:17 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:17.443+0000 7fcaeb671140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T20:18:17.661 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:17 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:17.476+0000 7fcaeb671140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T20:18:18.639 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout { 2026-03-09T20:18:18.639 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 11, 2026-03-09T20:18:18.639 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-09T20:18:18.639 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout } 2026-03-09T20:18:18.639 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:15.227+0000 7f3bef202640 1 Processor -- start 2026-03-09T20:18:18.639 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:15.227+0000 7f3bef202640 1 -- start start 2026-03-09T20:18:18.639 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:15.227+0000 7f3bef202640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f3be8111530 con 0x7f3be8074160 2026-03-09T20:18:18.639 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:15.227+0000 7f3bee200640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f3be8074160 0x7f3be8074560 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:39306/0 (socket says 192.168.123.105:39306) 2026-03-09T20:18:18.639 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:15.227+0000 7f3bee200640 1 -- 192.168.123.105:0/2063646309 learned_addr learned my addr 192.168.123.105:0/2063646309 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:18.639 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:15.228+0000 7f3bed1fe640 1 -- 192.168.123.105:0/2063646309 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1467664140 0 0) 0x7f3be8111530 con 0x7f3be8074160 2026-03-09T20:18:18.639 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:15.228+0000 7f3bed1fe640 1 -- 192.168.123.105:0/2063646309 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f3bd0003620 con 0x7f3be8074160 2026-03-09T20:18:18.639 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:15.229+0000 7f3bed1fe640 1 -- 192.168.123.105:0/2063646309 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 2040651927 0 0) 0x7f3bd0003620 con 0x7f3be8074160 2026-03-09T20:18:18.639 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:15.229+0000 7f3bed1fe640 1 -- 192.168.123.105:0/2063646309 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f3be8112710 con 0x7f3be8074160 2026-03-09T20:18:18.639 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 
2026-03-09T20:18:15.229+0000 7f3bed1fe640 1 -- 192.168.123.105:0/2063646309 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f3be4002e10 con 0x7f3be8074160 2026-03-09T20:18:18.639 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:15.229+0000 7f3bed1fe640 1 -- 192.168.123.105:0/2063646309 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f3be4003400 con 0x7f3be8074160 2026-03-09T20:18:18.639 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:15.229+0000 7f3bed1fe640 1 -- 192.168.123.105:0/2063646309 <== mon.0 v1:192.168.123.105:6789/0 5 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f3be4006280 con 0x7f3be8074160 2026-03-09T20:18:18.639 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:15.230+0000 7f3bef202640 1 -- 192.168.123.105:0/2063646309 >> v1:192.168.123.105:6789/0 conn(0x7f3be8074160 legacy=0x7f3be8074560 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:18.639 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:15.238+0000 7f3bef202640 1 -- 192.168.123.105:0/2063646309 shutdown_connections 2026-03-09T20:18:18.639 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:15.238+0000 7f3bef202640 1 -- 192.168.123.105:0/2063646309 >> 192.168.123.105:0/2063646309 conn(0x7f3be806f4e0 msgr2=0x7f3be8071920 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:18.639 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:15.238+0000 7f3bef202640 1 -- 192.168.123.105:0/2063646309 shutdown_connections 2026-03-09T20:18:18.639 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:15.238+0000 7f3bef202640 1 -- 192.168.123.105:0/2063646309 wait complete. 
2026-03-09T20:18:18.639 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:15.238+0000 7f3bef202640 1 Processor -- start 2026-03-09T20:18:18.639 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:15.238+0000 7f3bef202640 1 -- start start 2026-03-09T20:18:18.639 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:15.238+0000 7f3bef202640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f3be81133c0 con 0x7f3be8074160 2026-03-09T20:18:18.639 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:15.238+0000 7f3bee200640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f3be8074160 0x7f3be81a54c0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:39322/0 (socket says 192.168.123.105:39322) 2026-03-09T20:18:18.639 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:15.238+0000 7f3bee200640 1 -- 192.168.123.105:0/3644978669 learned_addr learned my addr 192.168.123.105:0/3644978669 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:18.639 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:15.240+0000 7f3bd77fe640 1 -- 192.168.123.105:0/3644978669 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 522106372 0 0) 0x7f3be81133c0 con 0x7f3be8074160 2026-03-09T20:18:18.639 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:15.240+0000 7f3bd77fe640 1 -- 192.168.123.105:0/3644978669 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f3bcc003620 con 0x7f3be8074160 2026-03-09T20:18:18.639 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:15.240+0000 7f3bd77fe640 1 -- 192.168.123.105:0/3644978669 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3194155565 0 0) 0x7f3bcc003620 con 0x7f3be8074160 2026-03-09T20:18:18.639 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:15.240+0000 7f3bd77fe640 1 -- 192.168.123.105:0/3644978669 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f3be81133c0 con 0x7f3be8074160 2026-03-09T20:18:18.640 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:15.240+0000 7f3bd77fe640 1 -- 192.168.123.105:0/3644978669 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f3be4002890 con 0x7f3be8074160 2026-03-09T20:18:18.640 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:15.240+0000 7f3bd77fe640 1 -- 192.168.123.105:0/3644978669 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2611415919 0 0) 0x7f3be81133c0 con 0x7f3be8074160 2026-03-09T20:18:18.640 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:15.240+0000 7f3bd77fe640 1 -- 192.168.123.105:0/3644978669 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f3be8113590 con 0x7f3be8074160 2026-03-09T20:18:18.640 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:15.240+0000 7f3bef202640 1 -- 192.168.123.105:0/3644978669 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f3be8113840 con 0x7f3be8074160 2026-03-09T20:18:18.640 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:15.240+0000 7f3bef202640 1 -- 192.168.123.105:0/3644978669 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f3be81a6ef0 con 0x7f3be8074160 2026-03-09T20:18:18.640 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:15.241+0000 7f3bd77fe640 1 -- 192.168.123.105:0/3644978669 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f3be4005ea0 con 0x7f3be8074160 2026-03-09T20:18:18.640 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:15.241+0000 7f3bd77fe640 1 -- 192.168.123.105:0/3644978669 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f3be4005020 con 0x7f3be8074160 2026-03-09T20:18:18.640 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:15.241+0000 7f3bd77fe640 1 -- 192.168.123.105:0/3644978669 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 9) ==== 50225+0+0 (unknown 3260243651 0 0) 0x7f3be40126a0 con 0x7f3be8074160 2026-03-09T20:18:18.640 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:15.241+0000 7f3bd77fe640 1 -- 192.168.123.105:0/3644978669 --> v1:192.168.123.105:6800/1901557444 -- command(tid 0: {"prefix": "get_command_descriptions"}) -- 0x7f3bcc042900 con 0x7f3bcc03ec30 2026-03-09T20:18:18.640 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:15.241+0000 7f3bd77fe640 1 -- 192.168.123.105:0/3644978669 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (unknown 2608398624 0 0) 0x7f3be404e430 con 0x7f3be8074160 2026-03-09T20:18:18.640 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:15.241+0000 7f3bed9ff640 1 -- 192.168.123.105:0/3644978669 >> v1:192.168.123.105:6800/1901557444 conn(0x7f3bcc03ec30 legacy=0x7f3bcc0410f0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v1:192.168.123.105:6800/1901557444 2026-03-09T20:18:18.640 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:15.442+0000 7f3bed9ff640 1 -- 192.168.123.105:0/3644978669 >> v1:192.168.123.105:6800/1901557444 conn(0x7f3bcc03ec30 legacy=0x7f3bcc0410f0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v1:192.168.123.105:6800/1901557444 2026-03-09T20:18:18.640 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:15.842+0000 7f3bed9ff640 1 -- 192.168.123.105:0/3644978669 >> v1:192.168.123.105:6800/1901557444 conn(0x7f3bcc03ec30 legacy=0x7f3bcc0410f0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v1:192.168.123.105:6800/1901557444 2026-03-09T20:18:18.640 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:16.643+0000 7f3bed9ff640 1 -- 192.168.123.105:0/3644978669 >> v1:192.168.123.105:6800/1901557444 conn(0x7f3bcc03ec30 legacy=0x7f3bcc0410f0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v1:192.168.123.105:6800/1901557444 2026-03-09T20:18:18.640 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:17.482+0000 7f3bd77fe640 1 -- 192.168.123.105:0/3644978669 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mgrmap(e 10) ==== 50027+0+0 (unknown 670486799 0 0) 0x7f3be404cdf0 con 0x7f3be8074160 2026-03-09T20:18:18.640 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:17.482+0000 7f3bd77fe640 1 -- 
192.168.123.105:0/3644978669 >> v1:192.168.123.105:6800/1901557444 conn(0x7f3bcc03ec30 legacy=0x7f3bcc0410f0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T20:18:18.640 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.484+0000 7f3bd77fe640 1 -- 192.168.123.105:0/3644978669 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mgrmap(e 11) ==== 50119+0+0 (unknown 3485944613 0 0) 0x7f3be404d870 con 0x7f3be8074160 2026-03-09T20:18:18.640 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.484+0000 7f3bd77fe640 1 -- 192.168.123.105:0/3644978669 --> v1:192.168.123.105:6800/3290461294 -- command(tid 0: {"prefix": "get_command_descriptions"}) -- 0x7f3be403ea70 con 0x7f3bcc043720 2026-03-09T20:18:18.640 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.487+0000 7f3bd77fe640 1 -- 192.168.123.105:0/3644978669 <== mgr.14150 v1:192.168.123.105:6800/3290461294 1 ==== command_reply(tid 0: 0 ) ==== 8+0+8901 (unknown 0 0 3832181493) 0x7f3be403ea70 con 0x7f3bcc043720 2026-03-09T20:18:18.640 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.491+0000 7f3bef202640 1 -- 192.168.123.105:0/3644978669 --> v1:192.168.123.105:6800/3290461294 -- command(tid 1: {"prefix": "mgr_status"}) -- 0x7f3be8111e80 con 0x7f3bcc043720 2026-03-09T20:18:18.640 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.491+0000 7f3bd77fe640 1 -- 192.168.123.105:0/3644978669 <== mgr.14150 v1:192.168.123.105:6800/3290461294 2 ==== command_reply(tid 1: 0 ) ==== 8+0+52 (unknown 0 0 3086460295) 0x7f3be8111e80 con 0x7f3bcc043720 2026-03-09T20:18:18.640 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.492+0000 7f3bef202640 1 -- 192.168.123.105:0/3644978669 >> v1:192.168.123.105:6800/3290461294 conn(0x7f3bcc043720 legacy=0x7f3bcc045b10 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:18.640 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.492+0000 7f3bef202640 1 -- 192.168.123.105:0/3644978669 >> v1:192.168.123.105:6789/0 conn(0x7f3be8074160 legacy=0x7f3be81a54c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:18.640 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.492+0000 7f3bef202640 1 -- 192.168.123.105:0/3644978669 shutdown_connections 2026-03-09T20:18:18.640 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.492+0000 7f3bef202640 1 -- 192.168.123.105:0/3644978669 >> 192.168.123.105:0/3644978669 conn(0x7f3be806f4e0 msgr2=0x7f3be8071920 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:18.640 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.492+0000 7f3bef202640 1 -- 192.168.123.105:0/3644978669 shutdown_connections 2026-03-09T20:18:18.640 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.492+0000 7f3bef202640 1 -- 192.168.123.105:0/3644978669 wait complete. 2026-03-09T20:18:18.640 INFO:teuthology.orchestra.run.vm05.stdout:mgr epoch 9 is available 2026-03-09T20:18:18.640 INFO:teuthology.orchestra.run.vm05.stdout:Generating a dashboard self-signed certificate... 
2026-03-09T20:18:19.201 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout Self-signed certificate created 2026-03-09T20:18:19.201 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.798+0000 7fa5c1ca1640 1 Processor -- start 2026-03-09T20:18:19.201 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.800+0000 7fa5c1ca1640 1 -- start start 2026-03-09T20:18:19.201 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.800+0000 7fa5c1ca1640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fa5bc10cbe0 con 0x7fa5bc1087b0 2026-03-09T20:18:19.201 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.801+0000 7fa5c0c9f640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7fa5bc1087b0 0x7fa5bc108bb0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:39424/0 (socket says 192.168.123.105:39424) 2026-03-09T20:18:19.201 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.801+0000 7fa5c0c9f640 1 -- 192.168.123.105:0/3524186030 learned_addr learned my addr 192.168.123.105:0/3524186030 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:19.201 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.801+0000 7fa5b37fe640 1 -- 192.168.123.105:0/3524186030 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3724111374 0 0) 0x7fa5bc10cbe0 con 0x7fa5bc1087b0 2026-03-09T20:18:19.201 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.802+0000 7fa5b37fe640 1 -- 192.168.123.105:0/3524186030 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fa5a8003620 con 0x7fa5bc1087b0 2026-03-09T20:18:19.201 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.802+0000 7fa5b37fe640 1 -- 192.168.123.105:0/3524186030 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 1907764353 0 0) 0x7fa5a8003620 con 0x7fa5bc1087b0 2026-03-09T20:18:19.201 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.802+0000 7fa5b37fe640 1 -- 192.168.123.105:0/3524186030 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fa5bc10ddc0 con 0x7fa5bc1087b0 2026-03-09T20:18:19.201 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.802+0000 7fa5b37fe640 1 -- 192.168.123.105:0/3524186030 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7fa5ac002e10 con 0x7fa5bc1087b0 2026-03-09T20:18:19.201 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.802+0000 7fa5b37fe640 1 -- 192.168.123.105:0/3524186030 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7fa5ac003400 con 0x7fa5bc1087b0 2026-03-09T20:18:19.201 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.802+0000 7fa5b37fe640 1 -- 192.168.123.105:0/3524186030 <== mon.0 v1:192.168.123.105:6789/0 5 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7fa5ac0059d0 con 0x7fa5bc1087b0 2026-03-09T20:18:19.201 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.802+0000 7fa5c1ca1640 1 -- 192.168.123.105:0/3524186030 >> 
v1:192.168.123.105:6789/0 conn(0x7fa5bc1087b0 legacy=0x7fa5bc108bb0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:19.201 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.803+0000 7fa5c1ca1640 1 -- 192.168.123.105:0/3524186030 shutdown_connections 2026-03-09T20:18:19.201 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.803+0000 7fa5c1ca1640 1 -- 192.168.123.105:0/3524186030 >> 192.168.123.105:0/3524186030 conn(0x7fa5bc07bc90 msgr2=0x7fa5bc07c0a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:19.201 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.803+0000 7fa5c1ca1640 1 -- 192.168.123.105:0/3524186030 shutdown_connections 2026-03-09T20:18:19.201 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.803+0000 7fa5c1ca1640 1 -- 192.168.123.105:0/3524186030 wait complete. 2026-03-09T20:18:19.201 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.803+0000 7fa5c1ca1640 1 Processor -- start 2026-03-09T20:18:19.201 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.804+0000 7fa5c1ca1640 1 -- start start 2026-03-09T20:18:19.201 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.804+0000 7fa5c0c9f640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7fa5bc1087b0 0x7fa5bc19e320 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:39438/0 (socket says 192.168.123.105:39438) 2026-03-09T20:18:19.201 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.804+0000 7fa5c0c9f640 1 -- 192.168.123.105:0/203818805 learned_addr learned my addr 192.168.123.105:0/203818805 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:19.201 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.804+0000 7fa5c1ca1640 1 -- 192.168.123.105:0/203818805 --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fa5bc19ea30 con 0x7fa5bc1087b0 2026-03-09T20:18:19.201 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.804+0000 7fa5b1ffb640 1 -- 192.168.123.105:0/203818805 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 737829625 0 0) 0x7fa5bc19ea30 con 0x7fa5bc1087b0 2026-03-09T20:18:19.201 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.804+0000 7fa5b1ffb640 1 -- 192.168.123.105:0/203818805 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fa598003620 con 0x7fa5bc1087b0 2026-03-09T20:18:19.201 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.805+0000 7fa5b1ffb640 1 -- 192.168.123.105:0/203818805 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1723175105 0 0) 0x7fa598003620 con 0x7fa5bc1087b0 2026-03-09T20:18:19.201 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.805+0000 7fa5b1ffb640 1 -- 192.168.123.105:0/203818805 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fa5bc19ea30 con 0x7fa5bc1087b0 2026-03-09T20:18:19.201 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.805+0000 7fa5b1ffb640 1 -- 192.168.123.105:0/203818805 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 
170+0+0 (unknown 2242798433 0 0) 0x7fa5ac005040 con 0x7fa5bc1087b0 2026-03-09T20:18:19.201 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.805+0000 7fa5b1ffb640 1 -- 192.168.123.105:0/203818805 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 754722219 0 0) 0x7fa5bc19ea30 con 0x7fa5bc1087b0 2026-03-09T20:18:19.201 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.805+0000 7fa5b1ffb640 1 -- 192.168.123.105:0/203818805 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fa5bc19ec00 con 0x7fa5bc1087b0 2026-03-09T20:18:19.201 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.805+0000 7fa5b1ffb640 1 -- 192.168.123.105:0/203818805 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7fa5ac002890 con 0x7fa5bc1087b0 2026-03-09T20:18:19.201 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.805+0000 7fa5c1ca1640 1 -- 192.168.123.105:0/203818805 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7fa5bc19eef0 con 0x7fa5bc1087b0 2026-03-09T20:18:19.201 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.805+0000 7fa5c1ca1640 1 -- 192.168.123.105:0/203818805 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7fa5bc1a2a40 con 0x7fa5bc1087b0 2026-03-09T20:18:19.201 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.806+0000 7fa5b1ffb640 1 -- 192.168.123.105:0/203818805 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7fa5ac006200 con 0x7fa5bc1087b0 2026-03-09T20:18:19.201 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.806+0000 7fa5b1ffb640 1 -- 192.168.123.105:0/203818805 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 11) ==== 50119+0+0 (unknown 3485944613 0 0) 0x7fa5ac012830 con 0x7fa5bc1087b0 2026-03-09T20:18:19.201 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.806+0000 7fa5b1ffb640 1 -- 192.168.123.105:0/203818805 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(3..3 src has 1..3) ==== 1069+0+0 (unknown 2236241792 0 0) 0x7fa5ac04d1d0 con 0x7fa5bc1087b0 2026-03-09T20:18:19.201 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.807+0000 7fa5c1ca1640 1 -- 192.168.123.105:0/203818805 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fa584005180 con 0x7fa5bc1087b0 2026-03-09T20:18:19.201 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.810+0000 7fa5b1ffb640 1 -- 192.168.123.105:0/203818805 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7fa5ac018870 con 0x7fa5bc1087b0 2026-03-09T20:18:19.201 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:18.907+0000 7fa5c1ca1640 1 -- 192.168.123.105:0/203818805 --> v1:192.168.123.105:6800/3290461294 -- mgr_command(tid 0: {"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}) -- 0x7fa584002bf0 con 0x7fa59803ec70 2026-03-09T20:18:19.201 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.024+0000 7fa5b1ffb640 1 -- 192.168.123.105:0/203818805 <== mgr.14150 v1:192.168.123.105:6800/3290461294 1 ==== 
mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (unknown 0 0 3317752739) 0x7fa584002bf0 con 0x7fa59803ec70
2026-03-09T20:18:19.201 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.027+0000 7fa5c1ca1640 1 -- 192.168.123.105:0/203818805 >> v1:192.168.123.105:6800/3290461294 conn(0x7fa59803ec70 legacy=0x7fa598041130 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:18:19.201 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.027+0000 7fa5c1ca1640 1 -- 192.168.123.105:0/203818805 >> v1:192.168.123.105:6789/0 conn(0x7fa5bc1087b0 legacy=0x7fa5bc19e320 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T20:18:19.201 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.028+0000 7fa5c1ca1640 1 -- 192.168.123.105:0/203818805 shutdown_connections
2026-03-09T20:18:19.201 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.028+0000 7fa5c1ca1640 1 -- 192.168.123.105:0/203818805 >> 192.168.123.105:0/203818805 conn(0x7fa5bc07bc90 msgr2=0x7fa5bc105770 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T20:18:19.201 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.028+0000 7fa5c1ca1640 1 -- 192.168.123.105:0/203818805 shutdown_connections
2026-03-09T20:18:19.201 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.028+0000 7fa5c1ca1640 1 -- 192.168.123.105:0/203818805 wait complete.
2026-03-09T20:18:19.201 INFO:teuthology.orchestra.run.vm05.stdout:Creating initial admin user...
2026-03-09T20:18:19.266 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:19 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y'
2026-03-09T20:18:19.266 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:19 vm05 ceph-mon[51870]: [09/Mar/2026:20:18:18] ENGINE Bus STARTING
2026-03-09T20:18:19.266 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:19 vm05 ceph-mon[51870]: [09/Mar/2026:20:18:18] ENGINE Serving on https://192.168.123.105:7150
2026-03-09T20:18:19.267 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:19 vm05 ceph-mon[51870]: [09/Mar/2026:20:18:18] ENGINE Client ('192.168.123.105', 60484) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-09T20:18:19.267 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:19 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y'
2026-03-09T20:18:19.267 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:19 vm05 ceph-mon[51870]: mgrmap e11: y(active, since 1.00898s)
2026-03-09T20:18:19.267 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:19 vm05 ceph-mon[51870]: from='client.14154 v1:192.168.123.105:0/3644978669' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
2026-03-09T20:18:19.267 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:19 vm05 ceph-mon[51870]: from='client.14154 v1:192.168.123.105:0/3644978669' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
2026-03-09T20:18:19.267 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:19 vm05 ceph-mon[51870]: [09/Mar/2026:20:18:18] ENGINE Serving on http://192.168.123.105:8765
2026-03-09T20:18:19.267 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:19 vm05 ceph-mon[51870]: [09/Mar/2026:20:18:18] ENGINE Bus STARTED
2026-03-09T20:18:19.267 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:19 vm05 ceph-mon[51870]:
from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:19.267 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:19 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:19.933 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout {"username": "admin", "password": "$2b$12$g3s6/Dr7NnVdVg.UmUajaePgPlJCObI8dCLGEt5DaFuJuV/t/Fee6", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1773087499, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": true} 2026-03-09T20:18:19.933 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.353+0000 7fa806b31640 1 Processor -- start 2026-03-09T20:18:19.933 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.353+0000 7fa806b31640 1 -- start start 2026-03-09T20:18:19.933 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.353+0000 7fa806b31640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fa80010ab00 con 0x7fa8001066d0 2026-03-09T20:18:19.933 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.354+0000 7fa8048a6640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7fa8001066d0 0x7fa800106ad0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:39440/0 (socket says 192.168.123.105:39440) 2026-03-09T20:18:19.933 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.354+0000 7fa8048a6640 1 -- 192.168.123.105:0/3905490620 learned_addr learned my addr 192.168.123.105:0/3905490620 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:19.933 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.354+0000 7fa7f77fe640 1 -- 192.168.123.105:0/3905490620 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 4192959504 0 0) 0x7fa80010ab00 con 0x7fa8001066d0 2026-03-09T20:18:19.933 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.354+0000 7fa7f77fe640 1 -- 192.168.123.105:0/3905490620 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fa7d8003620 con 0x7fa8001066d0 2026-03-09T20:18:19.933 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.354+0000 7fa7f77fe640 1 -- 192.168.123.105:0/3905490620 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 3170093361 0 0) 0x7fa7d8003620 con 0x7fa8001066d0 2026-03-09T20:18:19.933 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.354+0000 7fa7f77fe640 1 -- 192.168.123.105:0/3905490620 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fa80010bce0 con 0x7fa8001066d0 2026-03-09T20:18:19.933 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.354+0000 7fa7f77fe640 1 -- 192.168.123.105:0/3905490620 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7fa7e8002e10 con 0x7fa8001066d0 2026-03-09T20:18:19.933 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.355+0000 7fa7f77fe640 1 -- 192.168.123.105:0/3905490620 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7fa7e80034c0 con 0x7fa8001066d0 2026-03-09T20:18:19.933 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.355+0000 7fa806b31640 1 -- 192.168.123.105:0/3905490620 >> v1:192.168.123.105:6789/0 conn(0x7fa8001066d0 legacy=0x7fa800106ad0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:19.933 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.355+0000 7fa806b31640 1 -- 192.168.123.105:0/3905490620 shutdown_connections 2026-03-09T20:18:19.933 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.355+0000 7fa806b31640 1 -- 192.168.123.105:0/3905490620 >> 192.168.123.105:0/3905490620 conn(0x7fa800101e40 msgr2=0x7fa8001042a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:19.933 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.355+0000 7fa806b31640 1 -- 192.168.123.105:0/3905490620 shutdown_connections 2026-03-09T20:18:19.933 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.355+0000 7fa806b31640 1 -- 192.168.123.105:0/3905490620 wait complete. 2026-03-09T20:18:19.933 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.356+0000 7fa806b31640 1 Processor -- start 2026-03-09T20:18:19.933 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.356+0000 7fa806b31640 1 -- start start 2026-03-09T20:18:19.933 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.356+0000 7fa806b31640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fa80019ec70 con 0x7fa8001066d0 2026-03-09T20:18:19.933 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.356+0000 7fa8048a6640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7fa8001066d0 0x7fa80019e560 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:39452/0 (socket says 192.168.123.105:39452) 2026-03-09T20:18:19.933 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.356+0000 7fa8048a6640 1 -- 192.168.123.105:0/402688010 learned_addr learned my addr 192.168.123.105:0/402688010 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:19.933 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.356+0000 7fa7f5ffb640 1 -- 192.168.123.105:0/402688010 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1406480590 0 0) 0x7fa80019ec70 con 0x7fa8001066d0 2026-03-09T20:18:19.933 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.356+0000 7fa7f5ffb640 1 -- 192.168.123.105:0/402688010 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fa7d0003620 con 0x7fa8001066d0 2026-03-09T20:18:19.933 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.357+0000 7fa7f5ffb640 1 -- 192.168.123.105:0/402688010 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3260065265 0 0) 0x7fa7d0003620 con 0x7fa8001066d0 2026-03-09T20:18:19.933 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.357+0000 7fa7f5ffb640 1 -- 192.168.123.105:0/402688010 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fa80019ec70 con 0x7fa8001066d0 2026-03-09T20:18:19.933 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 
2026-03-09T20:18:19.357+0000 7fa7f5ffb640 1 -- 192.168.123.105:0/402688010 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7fa7e80031f0 con 0x7fa8001066d0 2026-03-09T20:18:19.933 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.357+0000 7fa7f5ffb640 1 -- 192.168.123.105:0/402688010 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 3340034603 0 0) 0x7fa80019ec70 con 0x7fa8001066d0 2026-03-09T20:18:19.934 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.357+0000 7fa7f5ffb640 1 -- 192.168.123.105:0/402688010 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fa80019ee40 con 0x7fa8001066d0 2026-03-09T20:18:19.934 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.357+0000 7fa7f5ffb640 1 -- 192.168.123.105:0/402688010 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7fa7e80027b0 con 0x7fa8001066d0 2026-03-09T20:18:19.934 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.357+0000 7fa806b31640 1 -- 192.168.123.105:0/402688010 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7fa80019f150 con 0x7fa8001066d0 2026-03-09T20:18:19.934 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.357+0000 7fa7f5ffb640 1 -- 192.168.123.105:0/402688010 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7fa7e8006200 con 0x7fa8001066d0 2026-03-09T20:18:19.934 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.358+0000 7fa806b31640 1 -- 192.168.123.105:0/402688010 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7fa8001a2ce0 con 0x7fa8001066d0 2026-03-09T20:18:19.934 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.358+0000 7fa7f5ffb640 1 -- 192.168.123.105:0/402688010 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 11) ==== 50119+0+0 (unknown 3485944613 0 0) 0x7fa7e8004ab0 con 0x7fa8001066d0 2026-03-09T20:18:19.934 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.358+0000 7fa806b31640 1 -- 192.168.123.105:0/402688010 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fa80010b910 con 0x7fa8001066d0 2026-03-09T20:18:19.934 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.358+0000 7fa7f5ffb640 1 -- 192.168.123.105:0/402688010 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(3..3 src has 1..3) ==== 1069+0+0 (unknown 2236241792 0 0) 0x7fa7e804d660 con 0x7fa8001066d0 2026-03-09T20:18:19.934 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.361+0000 7fa7f5ffb640 1 -- 192.168.123.105:0/402688010 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7fa7e8017e10 con 0x7fa8001066d0 2026-03-09T20:18:19.934 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.463+0000 7fa806b31640 1 -- 192.168.123.105:0/402688010 --> v1:192.168.123.105:6800/3290461294 -- mgr_command(tid 0: {"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}) -- 0x7fa8001a30e0 
con 0x7fa7d003eb40 2026-03-09T20:18:19.934 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.659+0000 7fa7f5ffb640 1 -- 192.168.123.105:0/402688010 <== mgr.14150 v1:192.168.123.105:6800/3290461294 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+252 (unknown 0 0 4260809440) 0x7fa8001a30e0 con 0x7fa7d003eb40 2026-03-09T20:18:19.934 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.661+0000 7fa806b31640 1 -- 192.168.123.105:0/402688010 >> v1:192.168.123.105:6800/3290461294 conn(0x7fa7d003eb40 legacy=0x7fa7d0041000 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:19.934 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.661+0000 7fa806b31640 1 -- 192.168.123.105:0/402688010 >> v1:192.168.123.105:6789/0 conn(0x7fa8001066d0 legacy=0x7fa80019e560 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:19.934 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.661+0000 7fa806b31640 1 -- 192.168.123.105:0/402688010 shutdown_connections 2026-03-09T20:18:19.934 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.661+0000 7fa806b31640 1 -- 192.168.123.105:0/402688010 >> 192.168.123.105:0/402688010 conn(0x7fa800101e40 msgr2=0x7fa800102290 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:19.934 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.661+0000 7fa806b31640 1 -- 192.168.123.105:0/402688010 shutdown_connections 2026-03-09T20:18:19.934 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:19.661+0000 7fa806b31640 1 -- 192.168.123.105:0/402688010 wait complete. 2026-03-09T20:18:19.934 INFO:teuthology.orchestra.run.vm05.stdout:Fetching dashboard port number... 
2026-03-09T20:18:20.186 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:20 vm05 ceph-mon[51870]: from='client.14162 v1:192.168.123.105:0/203818805' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:20.186 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:20 vm05 ceph-mon[51870]: from='client.14164 v1:192.168.123.105:0/402688010' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:20.186 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:20 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:20.341 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 8443 2026-03-09T20:18:20.341 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.074+0000 7f6e685ba640 1 Processor -- start 2026-03-09T20:18:20.342 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.075+0000 7f6e685ba640 1 -- start start 2026-03-09T20:18:20.342 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.075+0000 7f6e685ba640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f6e6010ab20 con 0x7f6e601066f0 2026-03-09T20:18:20.342 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.075+0000 7f6e6632f640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f6e601066f0 0x7f6e60106af0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:49484/0 (socket says 192.168.123.105:49484) 2026-03-09T20:18:20.342 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.075+0000 7f6e6632f640 1 -- 192.168.123.105:0/3443865377 learned_addr learned my addr 192.168.123.105:0/3443865377 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:20.342 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.076+0000 7f6e6532d640 1 -- 192.168.123.105:0/3443865377 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 753781072 0 0) 0x7f6e6010ab20 con 0x7f6e601066f0 2026-03-09T20:18:20.342 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.076+0000 7f6e6532d640 1 -- 192.168.123.105:0/3443865377 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f6e48003620 con 0x7f6e601066f0 2026-03-09T20:18:20.342 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.076+0000 7f6e6532d640 1 -- 192.168.123.105:0/3443865377 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 3079534332 0 0) 0x7f6e48003620 con 0x7f6e601066f0 2026-03-09T20:18:20.342 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.076+0000 7f6e6532d640 1 -- 192.168.123.105:0/3443865377 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f6e6010bd00 con 0x7f6e601066f0 2026-03-09T20:18:20.342 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.076+0000 7f6e6532d640 1 -- 192.168.123.105:0/3443865377 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f6e54002e10 con 0x7f6e601066f0 
2026-03-09T20:18:20.342 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.076+0000 7f6e6532d640 1 -- 192.168.123.105:0/3443865377 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f6e54003400 con 0x7f6e601066f0 2026-03-09T20:18:20.342 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.077+0000 7f6e6532d640 1 -- 192.168.123.105:0/3443865377 <== mon.0 v1:192.168.123.105:6789/0 5 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f6e540059d0 con 0x7f6e601066f0 2026-03-09T20:18:20.342 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.077+0000 7f6e685ba640 1 -- 192.168.123.105:0/3443865377 >> v1:192.168.123.105:6789/0 conn(0x7f6e601066f0 legacy=0x7f6e60106af0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:20.342 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.077+0000 7f6e685ba640 1 -- 192.168.123.105:0/3443865377 shutdown_connections 2026-03-09T20:18:20.342 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.077+0000 7f6e685ba640 1 -- 192.168.123.105:0/3443865377 >> 192.168.123.105:0/3443865377 conn(0x7f6e60101e60 msgr2=0x7f6e601042c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:20.342 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.077+0000 7f6e685ba640 1 -- 192.168.123.105:0/3443865377 shutdown_connections 2026-03-09T20:18:20.342 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.077+0000 7f6e685ba640 1 -- 192.168.123.105:0/3443865377 wait complete. 2026-03-09T20:18:20.342 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.078+0000 7f6e685ba640 1 Processor -- start 2026-03-09T20:18:20.342 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.078+0000 7f6e685ba640 1 -- start start 2026-03-09T20:18:20.342 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.078+0000 7f6e685ba640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f6e60196e10 con 0x7f6e601066f0 2026-03-09T20:18:20.342 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.078+0000 7f6e6632f640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f6e601066f0 0x7f6e60196700 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:49494/0 (socket says 192.168.123.105:49494) 2026-03-09T20:18:20.343 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.078+0000 7f6e6632f640 1 -- 192.168.123.105:0/586679507 learned_addr learned my addr 192.168.123.105:0/586679507 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:20.343 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.079+0000 7f6e4f7fe640 1 -- 192.168.123.105:0/586679507 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1325512760 0 0) 0x7f6e60196e10 con 0x7f6e601066f0 2026-03-09T20:18:20.343 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.079+0000 7f6e4f7fe640 1 -- 192.168.123.105:0/586679507 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f6e34003620 con 0x7f6e601066f0 2026-03-09T20:18:20.343 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.079+0000 7f6e4f7fe640 1 -- 192.168.123.105:0/586679507 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 543956266 0 0) 0x7f6e34003620 con 0x7f6e601066f0 2026-03-09T20:18:20.343 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.079+0000 7f6e4f7fe640 1 -- 192.168.123.105:0/586679507 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f6e60196e10 con 0x7f6e601066f0 2026-03-09T20:18:20.343 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.080+0000 7f6e4f7fe640 1 -- 192.168.123.105:0/586679507 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f6e540050b0 con 0x7f6e601066f0 2026-03-09T20:18:20.343 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.080+0000 7f6e4f7fe640 1 -- 192.168.123.105:0/586679507 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 1308194951 0 0) 0x7f6e60196e10 con 0x7f6e601066f0 2026-03-09T20:18:20.343 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.080+0000 7f6e4f7fe640 1 -- 192.168.123.105:0/586679507 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f6e60196fe0 con 0x7f6e601066f0 2026-03-09T20:18:20.343 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.080+0000 7f6e685ba640 1 -- 192.168.123.105:0/586679507 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f6e60193240 con 0x7f6e601066f0 2026-03-09T20:18:20.343 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.080+0000 7f6e685ba640 1 -- 192.168.123.105:0/586679507 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f6e60193780 con 0x7f6e601066f0 2026-03-09T20:18:20.343 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.081+0000 7f6e4f7fe640 1 -- 192.168.123.105:0/586679507 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f6e54002ce0 con 0x7f6e601066f0 2026-03-09T20:18:20.343 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.082+0000 7f6e4f7fe640 1 -- 192.168.123.105:0/586679507 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f6e54005e60 con 0x7f6e601066f0 2026-03-09T20:18:20.343 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.082+0000 7f6e4f7fe640 1 -- 192.168.123.105:0/586679507 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 11) ==== 50119+0+0 (unknown 3485944613 0 0) 0x7f6e540124b0 con 0x7f6e601066f0 2026-03-09T20:18:20.343 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.082+0000 7f6e4f7fe640 1 -- 192.168.123.105:0/586679507 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(3..3 src has 1..3) ==== 1069+0+0 (unknown 2236241792 0 0) 0x7f6e5404dff0 con 0x7f6e601066f0 2026-03-09T20:18:20.343 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.082+0000 7f6e685ba640 1 -- 192.168.123.105:0/586679507 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f6e6010b930 con 0x7f6e601066f0 2026-03-09T20:18:20.343 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.085+0000 
7f6e4f7fe640 1 -- 192.168.123.105:0/586679507 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f6e540189c0 con 0x7f6e601066f0 2026-03-09T20:18:20.343 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.091+0000 7f6e4f7fe640 1 -- 192.168.123.105:0/586679507 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mgrmap(e 12) ==== 50225+0+0 (unknown 1219069973 0 0) 0x7f6e540139b0 con 0x7f6e601066f0 2026-03-09T20:18:20.343 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.185+0000 7f6e685ba640 1 -- 192.168.123.105:0/586679507 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"} v 0) -- 0x7f6e60193a70 con 0x7f6e601066f0 2026-03-09T20:18:20.343 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.186+0000 7f6e4f7fe640 1 -- 192.168.123.105:0/586679507 <== mon.0 v1:192.168.123.105:6789/0 11 ==== mon_command_ack([{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]=0 v8) ==== 112+0+5 (unknown 3713421687 0 83753974) 0x7f6e60193a70 con 0x7f6e601066f0 2026-03-09T20:18:20.343 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.189+0000 7f6e685ba640 1 -- 192.168.123.105:0/586679507 >> v1:192.168.123.105:6800/3290461294 conn(0x7f6e3403ece0 legacy=0x7f6e340411a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:20.343 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.189+0000 7f6e685ba640 1 -- 192.168.123.105:0/586679507 >> v1:192.168.123.105:6789/0 conn(0x7f6e601066f0 legacy=0x7f6e60196700 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:20.343 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.190+0000 7f6e685ba640 1 -- 192.168.123.105:0/586679507 shutdown_connections 2026-03-09T20:18:20.343 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.190+0000 7f6e685ba640 1 -- 192.168.123.105:0/586679507 >> 192.168.123.105:0/586679507 conn(0x7f6e60101e60 msgr2=0x7f6e601022b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:20.343 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.190+0000 7f6e685ba640 1 -- 192.168.123.105:0/586679507 shutdown_connections 2026-03-09T20:18:20.344 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.190+0000 7f6e685ba640 1 -- 192.168.123.105:0/586679507 wait complete. 2026-03-09T20:18:20.344 INFO:teuthology.orchestra.run.vm05.stdout:firewalld does not appear to be present 2026-03-09T20:18:20.344 INFO:teuthology.orchestra.run.vm05.stdout:Not possible to open ports <[8443]>. 
firewalld.service is not available 2026-03-09T20:18:20.345 INFO:teuthology.orchestra.run.vm05.stdout:Ceph Dashboard is now available at: 2026-03-09T20:18:20.345 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:18:20.345 INFO:teuthology.orchestra.run.vm05.stdout: URL: https://vm05.local:8443/ 2026-03-09T20:18:20.345 INFO:teuthology.orchestra.run.vm05.stdout: User: admin 2026-03-09T20:18:20.345 INFO:teuthology.orchestra.run.vm05.stdout: Password: 1pv36vfya6 2026-03-09T20:18:20.345 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:18:20.345 INFO:teuthology.orchestra.run.vm05.stdout:Saving cluster configuration to /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/config directory 2026-03-09T20:18:20.778 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.482+0000 7f3f52aa9640 1 Processor -- start 2026-03-09T20:18:20.778 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.482+0000 7f3f52aa9640 1 -- start start 2026-03-09T20:18:20.778 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.483+0000 7f3f52aa9640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f3f4c10ab00 con 0x7f3f4c1066d0 2026-03-09T20:18:20.778 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.483+0000 7f3f5081e640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f3f4c1066d0 0x7f3f4c106ad0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:49496/0 (socket says 192.168.123.105:49496) 2026-03-09T20:18:20.778 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.483+0000 7f3f5081e640 1 -- 192.168.123.105:0/4267216375 learned_addr learned my addr 192.168.123.105:0/4267216375 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:20.778 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.483+0000 7f3f437fe640 1 -- 192.168.123.105:0/4267216375 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 793028405 0 0) 0x7f3f4c10ab00 con 0x7f3f4c1066d0 2026-03-09T20:18:20.778 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.483+0000 7f3f437fe640 1 -- 192.168.123.105:0/4267216375 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f3f38003620 con 0x7f3f4c1066d0 2026-03-09T20:18:20.778 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.484+0000 7f3f437fe640 1 -- 192.168.123.105:0/4267216375 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 3148129638 0 0) 0x7f3f38003620 con 0x7f3f4c1066d0 2026-03-09T20:18:20.778 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.484+0000 7f3f437fe640 1 -- 192.168.123.105:0/4267216375 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f3f4c10bce0 con 0x7f3f4c1066d0 2026-03-09T20:18:20.778 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.484+0000 7f3f437fe640 1 -- 192.168.123.105:0/4267216375 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f3f34002e10 con 0x7f3f4c1066d0 2026-03-09T20:18:20.778 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.484+0000 7f3f437fe640 1 -- 192.168.123.105:0/4267216375 <== mon.0 v1:192.168.123.105:6789/0 4 
==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f3f340034c0 con 0x7f3f4c1066d0 2026-03-09T20:18:20.778 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.484+0000 7f3f437fe640 1 -- 192.168.123.105:0/4267216375 <== mon.0 v1:192.168.123.105:6789/0 5 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f3f340059d0 con 0x7f3f4c1066d0 2026-03-09T20:18:20.778 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.485+0000 7f3f52aa9640 1 -- 192.168.123.105:0/4267216375 >> v1:192.168.123.105:6789/0 conn(0x7f3f4c1066d0 legacy=0x7f3f4c106ad0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:20.778 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.485+0000 7f3f52aa9640 1 -- 192.168.123.105:0/4267216375 shutdown_connections 2026-03-09T20:18:20.778 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.485+0000 7f3f52aa9640 1 -- 192.168.123.105:0/4267216375 >> 192.168.123.105:0/4267216375 conn(0x7f3f4c101e40 msgr2=0x7f3f4c1042a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:20.778 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.485+0000 7f3f52aa9640 1 -- 192.168.123.105:0/4267216375 shutdown_connections 2026-03-09T20:18:20.778 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.485+0000 7f3f52aa9640 1 -- 192.168.123.105:0/4267216375 wait complete. 2026-03-09T20:18:20.778 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.486+0000 7f3f52aa9640 1 Processor -- start 2026-03-09T20:18:20.779 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.486+0000 7f3f52aa9640 1 -- start start 2026-03-09T20:18:20.779 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.486+0000 7f3f52aa9640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f3f4c1a2350 con 0x7f3f4c1066d0 2026-03-09T20:18:20.779 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.486+0000 7f3f5081e640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f3f4c1066d0 0x7f3f4c07c480 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:49506/0 (socket says 192.168.123.105:49506) 2026-03-09T20:18:20.779 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.486+0000 7f3f5081e640 1 -- 192.168.123.105:0/2481448012 learned_addr learned my addr 192.168.123.105:0/2481448012 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:20.779 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.486+0000 7f3f41ffb640 1 -- 192.168.123.105:0/2481448012 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 992821413 0 0) 0x7f3f4c1a2350 con 0x7f3f4c1066d0 2026-03-09T20:18:20.779 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.486+0000 7f3f41ffb640 1 -- 192.168.123.105:0/2481448012 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f3f28003620 con 0x7f3f4c1066d0 2026-03-09T20:18:20.779 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.487+0000 7f3f41ffb640 1 -- 192.168.123.105:0/2481448012 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2298113056 
0 0) 0x7f3f28003620 con 0x7f3f4c1066d0 2026-03-09T20:18:20.779 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.487+0000 7f3f41ffb640 1 -- 192.168.123.105:0/2481448012 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f3f4c1a2350 con 0x7f3f4c1066d0 2026-03-09T20:18:20.779 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.487+0000 7f3f41ffb640 1 -- 192.168.123.105:0/2481448012 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f3f340048e0 con 0x7f3f4c1066d0 2026-03-09T20:18:20.779 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.487+0000 7f3f41ffb640 1 -- 192.168.123.105:0/2481448012 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2034159280 0 0) 0x7f3f4c1a2350 con 0x7f3f4c1066d0 2026-03-09T20:18:20.779 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.487+0000 7f3f41ffb640 1 -- 192.168.123.105:0/2481448012 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f3f4c1a3530 con 0x7f3f4c1066d0 2026-03-09T20:18:20.779 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.487+0000 7f3f41ffb640 1 -- 192.168.123.105:0/2481448012 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f3f34002890 con 0x7f3f4c1066d0 2026-03-09T20:18:20.779 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.488+0000 7f3f41ffb640 1 -- 192.168.123.105:0/2481448012 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f3f34006150 con 0x7f3f4c1066d0 2026-03-09T20:18:20.779 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.488+0000 7f3f52aa9640 1 -- 192.168.123.105:0/2481448012 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f3f4c1a2520 con 0x7f3f4c1066d0 2026-03-09T20:18:20.779 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.488+0000 7f3f52aa9640 1 -- 192.168.123.105:0/2481448012 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f3f4c1a2a60 con 0x7f3f4c1066d0 2026-03-09T20:18:20.780 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.489+0000 7f3f41ffb640 1 -- 192.168.123.105:0/2481448012 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 12) ==== 50225+0+0 (unknown 1219069973 0 0) 0x7f3f34002a40 con 0x7f3f4c1066d0 2026-03-09T20:18:20.780 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.489+0000 7f3f52aa9640 1 -- 192.168.123.105:0/2481448012 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f3f18005180 con 0x7f3f4c1066d0 2026-03-09T20:18:20.780 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.490+0000 7f3f41ffb640 1 -- 192.168.123.105:0/2481448012 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(3..3 src has 1..3) ==== 1069+0+0 (unknown 2236241792 0 0) 0x7f3f3404d480 con 0x7f3f4c1066d0 2026-03-09T20:18:20.780 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.492+0000 7f3f41ffb640 1 -- 192.168.123.105:0/2481448012 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f3f34017ab0 con 0x7f3f4c1066d0 
2026-03-09T20:18:20.780 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.625+0000 7f3f52aa9640 1 -- 192.168.123.105:0/2481448012 --> v1:192.168.123.105:6789/0 -- mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) -- 0x7f3f18005470 con 0x7f3f4c1066d0 2026-03-09T20:18:20.780 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.628+0000 7f3f41ffb640 1 -- 192.168.123.105:0/2481448012 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{prefix=config-key set, key=mgr/dashboard/cluster/status}]=0 set mgr/dashboard/cluster/status v28) ==== 153+0+0 (unknown 1169358022 0 0) 0x7f3f340173b0 con 0x7f3f4c1066d0 2026-03-09T20:18:20.780 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr set mgr/dashboard/cluster/status 2026-03-09T20:18:20.780 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.630+0000 7f3f52aa9640 1 -- 192.168.123.105:0/2481448012 >> v1:192.168.123.105:6800/3290461294 conn(0x7f3f2803e910 legacy=0x7f3f28040dd0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:20.780 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.630+0000 7f3f52aa9640 1 -- 192.168.123.105:0/2481448012 >> v1:192.168.123.105:6789/0 conn(0x7f3f4c1066d0 legacy=0x7f3f4c07c480 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:20.780 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.630+0000 7f3f52aa9640 1 -- 192.168.123.105:0/2481448012 shutdown_connections 2026-03-09T20:18:20.780 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.630+0000 7f3f52aa9640 1 -- 192.168.123.105:0/2481448012 >> 192.168.123.105:0/2481448012 conn(0x7f3f4c101e40 msgr2=0x7f3f4c1042a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:20.780 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.631+0000 7f3f52aa9640 1 -- 192.168.123.105:0/2481448012 shutdown_connections 2026-03-09T20:18:20.780 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T20:18:20.631+0000 7f3f52aa9640 1 -- 192.168.123.105:0/2481448012 wait complete. 
2026-03-09T20:18:20.780 INFO:teuthology.orchestra.run.vm05.stdout:You can access the Ceph CLI as following in case of multi-cluster or non-default config: 2026-03-09T20:18:20.780 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:18:20.780 INFO:teuthology.orchestra.run.vm05.stdout: sudo /sbin/cephadm shell --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring 2026-03-09T20:18:20.780 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:18:20.780 INFO:teuthology.orchestra.run.vm05.stdout:Or, if you are only running a single cluster on this host: 2026-03-09T20:18:20.780 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:18:20.780 INFO:teuthology.orchestra.run.vm05.stdout: sudo /sbin/cephadm shell 2026-03-09T20:18:20.780 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:18:20.780 INFO:teuthology.orchestra.run.vm05.stdout:Please consider enabling telemetry to help improve Ceph: 2026-03-09T20:18:20.780 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:18:20.780 INFO:teuthology.orchestra.run.vm05.stdout: ceph telemetry on 2026-03-09T20:18:20.780 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:18:20.780 INFO:teuthology.orchestra.run.vm05.stdout:For more information see: 2026-03-09T20:18:20.780 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:18:20.780 INFO:teuthology.orchestra.run.vm05.stdout: https://docs.ceph.com/en/latest/mgr/telemetry/ 2026-03-09T20:18:20.780 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:18:20.780 INFO:teuthology.orchestra.run.vm05.stdout:Bootstrap complete. 2026-03-09T20:18:20.813 INFO:tasks.cephadm:Fetching config... 2026-03-09T20:18:20.813 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-09T20:18:20.813 DEBUG:teuthology.orchestra.run.vm05:> dd if=/etc/ceph/ceph.conf of=/dev/stdout 2026-03-09T20:18:20.872 INFO:tasks.cephadm:Fetching client.admin keyring... 2026-03-09T20:18:20.872 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-09T20:18:20.872 DEBUG:teuthology.orchestra.run.vm05:> dd if=/etc/ceph/ceph.client.admin.keyring of=/dev/stdout 2026-03-09T20:18:20.928 INFO:tasks.cephadm:Fetching mon keyring... 2026-03-09T20:18:20.928 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-09T20:18:20.928 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/keyring of=/dev/stdout 2026-03-09T20:18:21.001 INFO:tasks.cephadm:Fetching pub ssh key... 2026-03-09T20:18:21.001 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-09T20:18:21.001 DEBUG:teuthology.orchestra.run.vm05:> dd if=/home/ubuntu/cephtest/ceph.pub of=/dev/stdout 2026-03-09T20:18:21.070 INFO:tasks.cephadm:Installing pub ssh key for root users... 
2026-03-09T20:18:21.070 DEBUG:teuthology.orchestra.run.vm05:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIaab6ZwLVI101Eqfehv3Q++OzlE71QnVFqrltWWyoHB ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-09T20:18:21.153 INFO:teuthology.orchestra.run.vm05.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIaab6ZwLVI101Eqfehv3Q++OzlE71QnVFqrltWWyoHB ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea 2026-03-09T20:18:21.163 DEBUG:teuthology.orchestra.run.vm09:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIaab6ZwLVI101Eqfehv3Q++OzlE71QnVFqrltWWyoHB ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-09T20:18:21.199 INFO:teuthology.orchestra.run.vm09.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIaab6ZwLVI101Eqfehv3Q++OzlE71QnVFqrltWWyoHB ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea 2026-03-09T20:18:21.211 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph config set mgr mgr/cephadm/allow_ptrace true 2026-03-09T20:18:21.395 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:18:21.413 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:21 vm05 ceph-mon[51870]: mgrmap e12: y(active, since 2s) 2026-03-09T20:18:21.413 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:21 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/586679507' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-09T20:18:21.413 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:21 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2481448012' entity='client.admin' 2026-03-09T20:18:21.538 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:21.537+0000 7f501e18e640 1 -- 192.168.123.105:0/2505161812 >> v1:192.168.123.105:6789/0 conn(0x7f501806b6e0 legacy=0x7f5018104c10 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:21.538 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:21.539+0000 7f501e18e640 1 -- 192.168.123.105:0/2505161812 shutdown_connections 2026-03-09T20:18:21.539 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:21.539+0000 7f501e18e640 1 -- 192.168.123.105:0/2505161812 >> 192.168.123.105:0/2505161812 conn(0x7f50180fc6c0 msgr2=0x7f50180feb00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:21.539 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:21.539+0000 7f501e18e640 1 -- 192.168.123.105:0/2505161812 shutdown_connections 2026-03-09T20:18:21.539 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:21.539+0000 7f501e18e640 1 -- 192.168.123.105:0/2505161812 wait complete. 
2026-03-09T20:18:21.539 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:21.539+0000 7f501e18e640 1 Processor -- start 2026-03-09T20:18:21.539 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:21.540+0000 7f501e18e640 1 -- start start 2026-03-09T20:18:21.539 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:21.540+0000 7f501e18e640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f501819f8c0 con 0x7f501806b6e0 2026-03-09T20:18:21.539 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:21.540+0000 7f501d18c640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f501806b6e0 0x7f501819f1b0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:49522/0 (socket says 192.168.123.105:49522) 2026-03-09T20:18:21.539 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:21.540+0000 7f501d18c640 1 -- 192.168.123.105:0/2124429187 learned_addr learned my addr 192.168.123.105:0/2124429187 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:21.539 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:21.540+0000 7f500e7fc640 1 -- 192.168.123.105:0/2124429187 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 4120989724 0 0) 0x7f501819f8c0 con 0x7f501806b6e0 2026-03-09T20:18:21.540 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:21.540+0000 7f500e7fc640 1 -- 192.168.123.105:0/2124429187 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f4fec003620 con 0x7f501806b6e0 2026-03-09T20:18:21.540 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:21.540+0000 7f500e7fc640 1 -- 192.168.123.105:0/2124429187 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1358278431 0 0) 0x7f4fec003620 con 0x7f501806b6e0 2026-03-09T20:18:21.540 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:21.540+0000 7f500e7fc640 1 -- 192.168.123.105:0/2124429187 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f501819f8c0 con 0x7f501806b6e0 2026-03-09T20:18:21.540 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:21.540+0000 7f500e7fc640 1 -- 192.168.123.105:0/2124429187 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f5008003270 con 0x7f501806b6e0 2026-03-09T20:18:21.540 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:21.541+0000 7f500e7fc640 1 -- 192.168.123.105:0/2124429187 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 3447806320 0 0) 0x7f501819f8c0 con 0x7f501806b6e0 2026-03-09T20:18:21.540 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:21.541+0000 7f500e7fc640 1 -- 192.168.123.105:0/2124429187 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f501819fa90 con 0x7f501806b6e0 2026-03-09T20:18:21.540 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:21.541+0000 7f501e18e640 1 -- 192.168.123.105:0/2124429187 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f501819fda0 con 0x7f501806b6e0 2026-03-09T20:18:21.540 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:21.541+0000 7f501e18e640 1 -- 192.168.123.105:0/2124429187 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f50181a3930 con 0x7f501806b6e0 2026-03-09T20:18:21.541 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:21.541+0000 7f501e18e640 1 
-- 192.168.123.105:0/2124429187 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f50181085e0 con 0x7f501806b6e0 2026-03-09T20:18:21.544 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:21.541+0000 7f500e7fc640 1 -- 192.168.123.105:0/2124429187 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f5008002a10 con 0x7f501806b6e0 2026-03-09T20:18:21.545 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:21.546+0000 7f500e7fc640 1 -- 192.168.123.105:0/2124429187 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f5008004c10 con 0x7f501806b6e0 2026-03-09T20:18:21.545 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:21.546+0000 7f500e7fc640 1 -- 192.168.123.105:0/2124429187 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 12) ==== 50225+0+0 (unknown 1219069973 0 0) 0x7f5008004ef0 con 0x7f501806b6e0 2026-03-09T20:18:21.546 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:21.546+0000 7f500e7fc640 1 -- 192.168.123.105:0/2124429187 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(3..3 src has 1..3) ==== 1069+0+0 (unknown 2236241792 0 0) 0x7f500804e140 con 0x7f501806b6e0 2026-03-09T20:18:21.546 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:21.546+0000 7f500e7fc640 1 -- 192.168.123.105:0/2124429187 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f50080504c0 con 0x7f501806b6e0 2026-03-09T20:18:21.642 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:21.642+0000 7f501e18e640 1 -- 192.168.123.105:0/2124429187 --> v1:192.168.123.105:6789/0 -- mon_command([{prefix=config set, name=mgr/cephadm/allow_ptrace}] v 0) -- 0x7f50181a3c20 con 0x7f501806b6e0 2026-03-09T20:18:21.647 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:21.647+0000 7f500e7fc640 1 -- 192.168.123.105:0/2124429187 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{prefix=config set, name=mgr/cephadm/allow_ptrace}]=0 v9) ==== 125+0+0 (unknown 3028693289 0 0) 0x7f50080187f0 con 0x7f501806b6e0 2026-03-09T20:18:21.652 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:21.653+0000 7f501e18e640 1 -- 192.168.123.105:0/2124429187 >> v1:192.168.123.105:6800/3290461294 conn(0x7f4fec03e900 legacy=0x7f4fec040dc0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:21.652 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:21.653+0000 7f501e18e640 1 -- 192.168.123.105:0/2124429187 >> v1:192.168.123.105:6789/0 conn(0x7f501806b6e0 legacy=0x7f501819f1b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:21.653 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:21.653+0000 7f501e18e640 1 -- 192.168.123.105:0/2124429187 shutdown_connections 2026-03-09T20:18:21.653 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:21.653+0000 7f501e18e640 1 -- 192.168.123.105:0/2124429187 >> 192.168.123.105:0/2124429187 conn(0x7f50180fc6c0 msgr2=0x7f5018100df0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:21.653 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:21.653+0000 7f501e18e640 1 -- 192.168.123.105:0/2124429187 shutdown_connections 2026-03-09T20:18:21.653 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:21.653+0000 7f501e18e640 1 -- 192.168.123.105:0/2124429187 wait complete. 
2026-03-09T20:18:21.803 INFO:tasks.cephadm:Distributing conf and client.admin keyring to all hosts + 0755 2026-03-09T20:18:21.804 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph orch client-keyring set client.admin '*' --mode 0755 2026-03-09T20:18:22.052 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:18:22.180 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:22.180+0000 7f5a1c83e640 1 -- 192.168.123.105:0/2622770162 >> v1:192.168.123.105:6789/0 conn(0x7f5a180770a0 legacy=0x7f5a18075500 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:22.180 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:22.181+0000 7f5a1c83e640 1 -- 192.168.123.105:0/2622770162 shutdown_connections 2026-03-09T20:18:22.180 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:22.181+0000 7f5a1c83e640 1 -- 192.168.123.105:0/2622770162 >> 192.168.123.105:0/2622770162 conn(0x7f5a180fd820 msgr2=0x7f5a180ffc40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:22.180 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:22.181+0000 7f5a1c83e640 1 -- 192.168.123.105:0/2622770162 shutdown_connections 2026-03-09T20:18:22.180 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:22.181+0000 7f5a1c83e640 1 -- 192.168.123.105:0/2622770162 wait complete. 2026-03-09T20:18:22.181 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:22.182+0000 7f5a1c83e640 1 Processor -- start 2026-03-09T20:18:22.181 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:22.182+0000 7f5a1c83e640 1 -- start start 2026-03-09T20:18:22.181 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:22.182+0000 7f5a1c83e640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f5a1819b3b0 con 0x7f5a180770a0 2026-03-09T20:18:22.181 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:22.182+0000 7f5a177fe640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f5a180770a0 0x7f5a1819aca0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:49534/0 (socket says 192.168.123.105:49534) 2026-03-09T20:18:22.181 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:22.182+0000 7f5a177fe640 1 -- 192.168.123.105:0/2398881420 learned_addr learned my addr 192.168.123.105:0/2398881420 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:22.182 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:22.183+0000 7f5a14ff9640 1 -- 192.168.123.105:0/2398881420 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1178919078 0 0) 0x7f5a1819b3b0 con 0x7f5a180770a0 2026-03-09T20:18:22.182 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:22.183+0000 7f5a14ff9640 1 -- 192.168.123.105:0/2398881420 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f59ec003620 con 0x7f5a180770a0 2026-03-09T20:18:22.182 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:22.183+0000 7f5a14ff9640 1 -- 192.168.123.105:0/2398881420 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2102432020 0 0) 0x7f59ec003620 con 0x7f5a180770a0 2026-03-09T20:18:22.182 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:22.183+0000 7f5a14ff9640 1 -- 192.168.123.105:0/2398881420 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f5a1819b3b0 con 0x7f5a180770a0 2026-03-09T20:18:22.183 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:22.183+0000 7f5a14ff9640 1 -- 192.168.123.105:0/2398881420 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f5a080031c0 con 0x7f5a180770a0 2026-03-09T20:18:22.183 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:22.184+0000 7f5a14ff9640 1 -- 192.168.123.105:0/2398881420 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 1653786252 0 0) 0x7f5a1819b3b0 con 0x7f5a180770a0 2026-03-09T20:18:22.183 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:22.184+0000 7f5a14ff9640 1 -- 192.168.123.105:0/2398881420 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f5a1819b580 con 0x7f5a180770a0 2026-03-09T20:18:22.183 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:22.184+0000 7f5a14ff9640 1 -- 192.168.123.105:0/2398881420 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f5a08004710 con 0x7f5a180770a0 2026-03-09T20:18:22.184 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:22.184+0000 7f5a14ff9640 1 -- 192.168.123.105:0/2398881420 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f5a08005030 con 0x7f5a180770a0 2026-03-09T20:18:22.184 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:22.185+0000 7f5a1c83e640 1 -- 192.168.123.105:0/2398881420 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f5a1819b890 con 0x7f5a180770a0 2026-03-09T20:18:22.184 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:22.185+0000 7f5a1c83e640 1 -- 192.168.123.105:0/2398881420 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f5a1819f420 con 0x7f5a180770a0 2026-03-09T20:18:22.185 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:22.186+0000 7f5a14ff9640 1 -- 192.168.123.105:0/2398881420 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 12) ==== 50225+0+0 (unknown 1219069973 0 0) 0x7f5a080028e0 con 0x7f5a180770a0 2026-03-09T20:18:22.186 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:22.186+0000 7f5a14ff9640 1 -- 192.168.123.105:0/2398881420 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(3..3 src has 1..3) ==== 1069+0+0 (unknown 2236241792 0 0) 0x7f5a0804d2c0 con 0x7f5a180770a0 2026-03-09T20:18:22.186 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:22.187+0000 7f5a1c83e640 1 -- 192.168.123.105:0/2398881420 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f5a1810a860 con 0x7f5a180770a0 2026-03-09T20:18:22.189 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:22.190+0000 7f5a14ff9640 1 -- 192.168.123.105:0/2398881420 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f5a08017950 con 0x7f5a180770a0 2026-03-09T20:18:22.301 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:22.300+0000 7f5a1c83e640 1 -- 192.168.123.105:0/2398881420 --> v1:192.168.123.105:6800/3290461294 -- mgr_command(tid 0: {"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}) -- 
0x7f5a181082e0 con 0x7f59ec03ec10 2026-03-09T20:18:22.306 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:22.306+0000 7f5a14ff9640 1 -- 192.168.123.105:0/2398881420 <== mgr.14150 v1:192.168.123.105:6800/3290461294 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+0 (unknown 0 0 0) 0x7f5a181082e0 con 0x7f59ec03ec10 2026-03-09T20:18:22.314 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:22.313+0000 7f5a1c83e640 1 -- 192.168.123.105:0/2398881420 >> v1:192.168.123.105:6800/3290461294 conn(0x7f59ec03ec10 legacy=0x7f59ec0410d0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:22.314 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:22.313+0000 7f5a1c83e640 1 -- 192.168.123.105:0/2398881420 >> v1:192.168.123.105:6789/0 conn(0x7f5a180770a0 legacy=0x7f5a1819aca0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:22.314 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:22.315+0000 7f5a1c83e640 1 -- 192.168.123.105:0/2398881420 shutdown_connections 2026-03-09T20:18:22.314 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:22.315+0000 7f5a1c83e640 1 -- 192.168.123.105:0/2398881420 >> 192.168.123.105:0/2398881420 conn(0x7f5a180fd820 msgr2=0x7f5a180ffa50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:22.314 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:22.315+0000 7f5a1c83e640 1 -- 192.168.123.105:0/2398881420 shutdown_connections 2026-03-09T20:18:22.315 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:22.315+0000 7f5a1c83e640 1 -- 192.168.123.105:0/2398881420 wait complete. 2026-03-09T20:18:22.511 INFO:tasks.cephadm:Writing (initial) conf and keyring to vm09 2026-03-09T20:18:22.511 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T20:18:22.511 DEBUG:teuthology.orchestra.run.vm09:> dd of=/etc/ceph/ceph.conf 2026-03-09T20:18:22.527 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T20:18:22.527 DEBUG:teuthology.orchestra.run.vm09:> dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:18:22.586 INFO:tasks.cephadm:Adding host vm09 to orchestrator... 2026-03-09T20:18:22.586 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph orch host add vm09 2026-03-09T20:18:22.903 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:18:22.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:22 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/2124429187' entity='client.admin' 2026-03-09T20:18:22.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:22 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:22.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:22 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:22.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:22 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:18:22.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:22 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:22.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:22 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:18:22.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:22 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:22.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:22 vm05 ceph-mon[51870]: from='client.14172 v1:192.168.123.105:0/2398881420' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:22.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:22 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:22.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:22 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:18:22.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:22 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:18:22.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:22 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:18:22.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:22 vm05 ceph-mon[51870]: Updating vm05:/etc/ceph/ceph.conf 2026-03-09T20:18:22.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:22 vm05 ceph-mon[51870]: Updating vm05:/var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/config/ceph.conf 2026-03-09T20:18:23.103 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:23.102+0000 7f77611eb640 1 -- 192.168.123.105:0/382287211 >> v1:192.168.123.105:6789/0 conn(0x7f775c074e50 legacy=0x7f775c073370 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:23.104 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:23.104+0000 7f77611eb640 1 -- 192.168.123.105:0/382287211 shutdown_connections 2026-03-09T20:18:23.104 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:23.104+0000 7f77611eb640 1 -- 192.168.123.105:0/382287211 >> 192.168.123.105:0/382287211 conn(0x7f775c06ef70 msgr2=0x7f775c0713b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:23.104 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:23.104+0000 7f77611eb640 1 -- 192.168.123.105:0/382287211 shutdown_connections 2026-03-09T20:18:23.106 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:23.105+0000 7f77611eb640 1 -- 192.168.123.105:0/382287211 wait complete. 2026-03-09T20:18:23.106 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:23.106+0000 7f77611eb640 1 Processor -- start 2026-03-09T20:18:23.106 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:23.106+0000 7f77611eb640 1 -- start start 2026-03-09T20:18:23.106 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:23.106+0000 7f77611eb640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f775c118bd0 con 0x7f775c074e50 2026-03-09T20:18:23.106 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:23.106+0000 7f775bfff640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f775c074e50 0x7f775c1184c0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:49552/0 (socket says 192.168.123.105:49552) 2026-03-09T20:18:23.106 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:23.106+0000 7f775bfff640 1 -- 192.168.123.105:0/3565297751 learned_addr learned my addr 192.168.123.105:0/3565297751 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:23.107 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:23.107+0000 7f77597fa640 1 -- 192.168.123.105:0/3565297751 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3335930348 0 0) 0x7f775c118bd0 con 0x7f775c074e50 2026-03-09T20:18:23.107 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:23.107+0000 7f77597fa640 1 -- 192.168.123.105:0/3565297751 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f772c003620 con 0x7f775c074e50 2026-03-09T20:18:23.108 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:23.108+0000 7f77597fa640 1 -- 192.168.123.105:0/3565297751 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2140232170 0 0) 0x7f772c003620 con 0x7f775c074e50 2026-03-09T20:18:23.108 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:23.108+0000 7f77597fa640 1 -- 192.168.123.105:0/3565297751 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f775c118bd0 con 0x7f775c074e50 2026-03-09T20:18:23.108 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:23.108+0000 7f77597fa640 1 -- 192.168.123.105:0/3565297751 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f7750003ce0 con 0x7f775c074e50 2026-03-09T20:18:23.110 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:23.109+0000 7f77597fa640 1 -- 192.168.123.105:0/3565297751 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 1570218868 0 0) 0x7f775c118bd0 con 0x7f775c074e50 2026-03-09T20:18:23.110 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:23.109+0000 7f77597fa640 1 -- 192.168.123.105:0/3565297751 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f775c118da0 con 0x7f775c074e50 2026-03-09T20:18:23.110 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:23.109+0000 7f77611eb640 1 -- 192.168.123.105:0/3565297751 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f775c119030 con 0x7f775c074e50 2026-03-09T20:18:23.110 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:23.109+0000 7f77611eb640 1 -- 192.168.123.105:0/3565297751 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 
0x7f775c1a3540 con 0x7f775c074e50 2026-03-09T20:18:23.110 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:23.110+0000 7f77597fa640 1 -- 192.168.123.105:0/3565297751 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f7750002e70 con 0x7f775c074e50 2026-03-09T20:18:23.110 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:23.110+0000 7f77597fa640 1 -- 192.168.123.105:0/3565297751 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f7750004da0 con 0x7f775c074e50 2026-03-09T20:18:23.110 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:23.110+0000 7f77597fa640 1 -- 192.168.123.105:0/3565297751 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 12) ==== 50225+0+0 (unknown 1219069973 0 0) 0x7f7750005020 con 0x7f775c074e50 2026-03-09T20:18:23.110 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:23.110+0000 7f77597fa640 1 -- 192.168.123.105:0/3565297751 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(3..3 src has 1..3) ==== 1069+0+0 (unknown 2236241792 0 0) 0x7f775004e370 con 0x7f775c074e50 2026-03-09T20:18:23.111 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:23.111+0000 7f77611eb640 1 -- 192.168.123.105:0/3565297751 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f7728005180 con 0x7f775c074e50 2026-03-09T20:18:23.114 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:23.115+0000 7f77597fa640 1 -- 192.168.123.105:0/3565297751 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f7750018a20 con 0x7f775c074e50 2026-03-09T20:18:23.248 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:23.248+0000 7f77611eb640 1 -- 192.168.123.105:0/3565297751 --> v1:192.168.123.105:6800/3290461294 -- mgr_command(tid 0: {"prefix": "orch host add", "hostname": "vm09", "target": ["mon-mgr", ""]}) -- 0x7f7728002bf0 con 0x7f772c03ecd0 2026-03-09T20:18:24.285 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:24.285+0000 7f77597fa640 1 -- 192.168.123.105:0/3565297751 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mgrmap(e 13) ==== 50271+0+0 (unknown 2746147771 0 0) 0x7f7750017c40 con 0x7f775c074e50 2026-03-09T20:18:24.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:24 vm05 ceph-mon[51870]: Updating vm05:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:18:24.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:24 vm05 ceph-mon[51870]: Updating vm05:/var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/config/ceph.client.admin.keyring 2026-03-09T20:18:24.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:24 vm05 ceph-mon[51870]: from='client.14174 v1:192.168.123.105:0/3565297751' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm09", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:24.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:24 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:24.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:24 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:24.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:24 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:24.780 INFO:teuthology.orchestra.run.vm05.stdout:Added host 'vm09' with addr '192.168.123.109' 
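Annotation (not part of the run log): the entries above show the harness adding vm09 with "ceph orch host add" and then confirming it via "ceph orch host ls --format=json". A minimal, hypothetical Python sketch of that same round trip, using the exact cephadm invocation and JSON fields ("hostname", "addr") seen in this log, could look like the following; the helper name and the assertion are illustrative, not teuthology code.

    # Hypothetical helper: repeat the "orch host ls" check done after
    # "ceph orch host add vm09" in the log above.
    import json
    import subprocess

    FSID = "c0151936-1bf4-11f1-b896-23f7bea8a6ea"   # fsid of this run
    IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"

    def orch_host_ls():
        # Same cephadm shell command as in the log, with JSON output.
        cmd = [
            "sudo", "cephadm", "--image", IMAGE, "shell",
            "-c", "/etc/ceph/ceph.conf",
            "-k", "/etc/ceph/ceph.client.admin.keyring",
            "--fsid", FSID, "--",
            "ceph", "orch", "host", "ls", "--format=json",
        ]
        out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
        return json.loads(out)

    hosts = {h["hostname"]: h["addr"] for h in orch_host_ls()}
    assert "vm09" in hosts, "vm09 was not registered with the orchestrator"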
2026-03-09T20:18:24.780 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:24.780+0000 7f77597fa640 1 -- 192.168.123.105:0/3565297751 <== mgr.14150 v1:192.168.123.105:6800/3290461294 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+46 (unknown 0 0 530711637) 0x7f7728002bf0 con 0x7f772c03ecd0 2026-03-09T20:18:24.782 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:24.782+0000 7f77611eb640 1 -- 192.168.123.105:0/3565297751 >> v1:192.168.123.105:6800/3290461294 conn(0x7f772c03ecd0 legacy=0x7f772c041190 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:24.782 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:24.783+0000 7f77611eb640 1 -- 192.168.123.105:0/3565297751 >> v1:192.168.123.105:6789/0 conn(0x7f775c074e50 legacy=0x7f775c1184c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:24.782 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:24.783+0000 7f77611eb640 1 -- 192.168.123.105:0/3565297751 shutdown_connections 2026-03-09T20:18:24.782 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:24.783+0000 7f77611eb640 1 -- 192.168.123.105:0/3565297751 >> 192.168.123.105:0/3565297751 conn(0x7f775c06ef70 msgr2=0x7f775c10c3d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:24.782 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:24.783+0000 7f77611eb640 1 -- 192.168.123.105:0/3565297751 shutdown_connections 2026-03-09T20:18:24.782 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:24.783+0000 7f77611eb640 1 -- 192.168.123.105:0/3565297751 wait complete. 2026-03-09T20:18:25.090 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph orch host ls --format=json 2026-03-09T20:18:25.273 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:18:25.303 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:25 vm05 ceph-mon[51870]: Deploying cephadm binary to vm09 2026-03-09T20:18:25.303 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:25 vm05 ceph-mon[51870]: mgrmap e13: y(active, since 6s) 2026-03-09T20:18:25.303 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:25 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:25.303 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:25 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:18:25.303 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:25 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:25.303 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:25 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:25.411 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.411+0000 7fbdc61c8640 1 -- 192.168.123.105:0/1113341212 >> v1:192.168.123.105:6789/0 conn(0x7fbdc0102440 legacy=0x7fbdc0102820 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:25.411 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.412+0000 7fbdc61c8640 1 -- 192.168.123.105:0/1113341212 shutdown_connections 2026-03-09T20:18:25.411 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.412+0000 
7fbdc61c8640 1 -- 192.168.123.105:0/1113341212 >> 192.168.123.105:0/1113341212 conn(0x7fbdc00fdec0 msgr2=0x7fbdc01002e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:25.412 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.412+0000 7fbdc61c8640 1 -- 192.168.123.105:0/1113341212 shutdown_connections 2026-03-09T20:18:25.412 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.412+0000 7fbdc61c8640 1 -- 192.168.123.105:0/1113341212 wait complete. 2026-03-09T20:18:25.412 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.413+0000 7fbdc61c8640 1 Processor -- start 2026-03-09T20:18:25.412 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.413+0000 7fbdc61c8640 1 -- start start 2026-03-09T20:18:25.412 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.413+0000 7fbdc61c8640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fbdc0077260 con 0x7fbdc0102440 2026-03-09T20:18:25.412 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.413+0000 7fbdbf7fe640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7fbdc0102440 0x7fbdc0075370 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:49570/0 (socket says 192.168.123.105:49570) 2026-03-09T20:18:25.412 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.413+0000 7fbdbf7fe640 1 -- 192.168.123.105:0/2970828041 learned_addr learned my addr 192.168.123.105:0/2970828041 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:25.413 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.413+0000 7fbdbcff9640 1 -- 192.168.123.105:0/2970828041 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 266628710 0 0) 0x7fbdc0077260 con 0x7fbdc0102440 2026-03-09T20:18:25.413 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.414+0000 7fbdbcff9640 1 -- 192.168.123.105:0/2970828041 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fbd94003620 con 0x7fbdc0102440 2026-03-09T20:18:25.413 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.414+0000 7fbdbcff9640 1 -- 192.168.123.105:0/2970828041 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3010810924 0 0) 0x7fbd94003620 con 0x7fbdc0102440 2026-03-09T20:18:25.413 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.414+0000 7fbdbcff9640 1 -- 192.168.123.105:0/2970828041 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fbdc0077260 con 0x7fbdc0102440 2026-03-09T20:18:25.413 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.414+0000 7fbdbcff9640 1 -- 192.168.123.105:0/2970828041 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7fbdb0002cd0 con 0x7fbdc0102440 2026-03-09T20:18:25.413 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.414+0000 7fbdbcff9640 1 -- 192.168.123.105:0/2970828041 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 199727387 0 0) 0x7fbdc0077260 con 0x7fbdc0102440 2026-03-09T20:18:25.413 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.414+0000 7fbdbcff9640 1 -- 192.168.123.105:0/2970828041 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fbdc0075a80 con 0x7fbdc0102440 2026-03-09T20:18:25.413 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.414+0000 7fbdc61c8640 1 
-- 192.168.123.105:0/2970828041 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7fbdc0075d70 con 0x7fbdc0102440 2026-03-09T20:18:25.414 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.414+0000 7fbdbcff9640 1 -- 192.168.123.105:0/2970828041 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7fbdb0003280 con 0x7fbdc0102440 2026-03-09T20:18:25.414 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.414+0000 7fbdc61c8640 1 -- 192.168.123.105:0/2970828041 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7fbdc01a4110 con 0x7fbdc0102440 2026-03-09T20:18:25.414 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.414+0000 7fbdbcff9640 1 -- 192.168.123.105:0/2970828041 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7fbdb0004f20 con 0x7fbdc0102440 2026-03-09T20:18:25.414 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.415+0000 7fbdc61c8640 1 -- 192.168.123.105:0/2970828041 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fbd84005180 con 0x7fbdc0102440 2026-03-09T20:18:25.419 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.418+0000 7fbdbcff9640 1 -- 192.168.123.105:0/2970828041 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 13) ==== 50271+0+0 (unknown 2746147771 0 0) 0x7fbdb00050e0 con 0x7fbdc0102440 2026-03-09T20:18:25.419 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.418+0000 7fbdbcff9640 1 -- 192.168.123.105:0/2970828041 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(3..3 src has 1..3) ==== 1069+0+0 (unknown 2236241792 0 0) 0x7fbdb004e1f0 con 0x7fbdc0102440 2026-03-09T20:18:25.419 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.420+0000 7fbdbcff9640 1 -- 192.168.123.105:0/2970828041 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7fbdb00187c0 con 0x7fbdc0102440 2026-03-09T20:18:25.513 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.514+0000 7fbdc61c8640 1 -- 192.168.123.105:0/2970828041 --> v1:192.168.123.105:6800/3290461294 -- mgr_command(tid 0: {"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}) -- 0x7fbd84002bf0 con 0x7fbd9403ed00 2026-03-09T20:18:25.514 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.515+0000 7fbdbcff9640 1 -- 192.168.123.105:0/2970828041 <== mgr.14150 v1:192.168.123.105:6800/3290461294 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+155 (unknown 0 0 870659353) 0x7fbd84002bf0 con 0x7fbd9403ed00 2026-03-09T20:18:25.514 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:18:25.514 INFO:teuthology.orchestra.run.vm05.stdout:[{"addr": "192.168.123.105", "hostname": "vm05", "labels": [], "status": ""}, {"addr": "192.168.123.109", "hostname": "vm09", "labels": [], "status": ""}] 2026-03-09T20:18:25.516 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.517+0000 7fbdc61c8640 1 -- 192.168.123.105:0/2970828041 >> v1:192.168.123.105:6800/3290461294 conn(0x7fbd9403ed00 legacy=0x7fbd940411c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:25.516 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.517+0000 7fbdc61c8640 1 -- 192.168.123.105:0/2970828041 >> v1:192.168.123.105:6789/0 conn(0x7fbdc0102440 legacy=0x7fbdc0075370 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:25.516 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.517+0000 7fbdc61c8640 1 -- 192.168.123.105:0/2970828041 shutdown_connections 2026-03-09T20:18:25.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.517+0000 7fbdc61c8640 1 -- 192.168.123.105:0/2970828041 >> 192.168.123.105:0/2970828041 conn(0x7fbdc00fdec0 msgr2=0x7fbdc00ff700 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:25.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.517+0000 7fbdc61c8640 1 -- 192.168.123.105:0/2970828041 shutdown_connections 2026-03-09T20:18:25.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.517+0000 7fbdc61c8640 1 -- 192.168.123.105:0/2970828041 wait complete. 2026-03-09T20:18:25.683 INFO:tasks.cephadm:Setting crush tunables to default 2026-03-09T20:18:25.683 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph osd crush tunables default 2026-03-09T20:18:25.855 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:18:25.995 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.993+0000 7f8db228f640 1 -- 192.168.123.105:0/2883331467 >> v1:192.168.123.105:6789/0 conn(0x7f8dac100830 legacy=0x7f8dac100c10 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:25.995 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.996+0000 7f8db228f640 1 -- 192.168.123.105:0/2883331467 shutdown_connections 2026-03-09T20:18:25.995 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.996+0000 7f8db228f640 1 -- 192.168.123.105:0/2883331467 >> 192.168.123.105:0/2883331467 conn(0x7f8dac0fc520 msgr2=0x7f8dac0fe940 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:25.995 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.996+0000 7f8db228f640 1 -- 192.168.123.105:0/2883331467 shutdown_connections 2026-03-09T20:18:25.996 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.996+0000 7f8db228f640 1 -- 192.168.123.105:0/2883331467 wait complete. 
2026-03-09T20:18:25.996 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.997+0000 7f8db228f640 1 Processor -- start 2026-03-09T20:18:25.997 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.997+0000 7f8db228f640 1 -- start start 2026-03-09T20:18:25.997 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.997+0000 7f8db228f640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f8dac19c090 con 0x7f8dac100830 2026-03-09T20:18:25.997 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.997+0000 7f8dabfff640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f8dac100830 0x7f8dac19b980 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:49592/0 (socket says 192.168.123.105:49592) 2026-03-09T20:18:25.997 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.997+0000 7f8dabfff640 1 -- 192.168.123.105:0/3353423706 learned_addr learned my addr 192.168.123.105:0/3353423706 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:25.998 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.998+0000 7f8da97fa640 1 -- 192.168.123.105:0/3353423706 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2187227599 0 0) 0x7f8dac19c090 con 0x7f8dac100830 2026-03-09T20:18:25.998 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.998+0000 7f8da97fa640 1 -- 192.168.123.105:0/3353423706 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f8d88003620 con 0x7f8dac100830 2026-03-09T20:18:25.998 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.998+0000 7f8da97fa640 1 -- 192.168.123.105:0/3353423706 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 763869651 0 0) 0x7f8d88003620 con 0x7f8dac100830 2026-03-09T20:18:25.998 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.998+0000 7f8da97fa640 1 -- 192.168.123.105:0/3353423706 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f8dac19c090 con 0x7f8dac100830 2026-03-09T20:18:25.998 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.998+0000 7f8da97fa640 1 -- 192.168.123.105:0/3353423706 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f8d98004450 con 0x7f8dac100830 2026-03-09T20:18:25.998 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.998+0000 7f8da97fa640 1 -- 192.168.123.105:0/3353423706 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 907775412 0 0) 0x7f8dac19c090 con 0x7f8dac100830 2026-03-09T20:18:25.998 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.998+0000 7f8da97fa640 1 -- 192.168.123.105:0/3353423706 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f8dac19c260 con 0x7f8dac100830 2026-03-09T20:18:25.998 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.998+0000 7f8da97fa640 1 -- 192.168.123.105:0/3353423706 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f8d980032e0 con 0x7f8dac100830 2026-03-09T20:18:25.998 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.998+0000 7f8da97fa640 1 -- 192.168.123.105:0/3353423706 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f8d98004ee0 con 0x7f8dac100830 2026-03-09T20:18:25.998 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.999+0000 7f8db228f640 1 -- 192.168.123.105:0/3353423706 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f8dac19f690 con 0x7f8dac100830 2026-03-09T20:18:26.002 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:25.999+0000 7f8db228f640 1 -- 192.168.123.105:0/3353423706 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f8dac19fb50 con 0x7f8dac100830 2026-03-09T20:18:26.002 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:26.000+0000 7f8da97fa640 1 -- 192.168.123.105:0/3353423706 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 13) ==== 50271+0+0 (unknown 2746147771 0 0) 0x7f8d98003dd0 con 0x7f8dac100830 2026-03-09T20:18:26.002 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:26.000+0000 7f8da97fa640 1 -- 192.168.123.105:0/3353423706 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(3..3 src has 1..3) ==== 1069+0+0 (unknown 2236241792 0 0) 0x7f8d9804d3f0 con 0x7f8dac100830 2026-03-09T20:18:26.002 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:26.000+0000 7f8db228f640 1 -- 192.168.123.105:0/3353423706 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f8d7c005180 con 0x7f8dac100830 2026-03-09T20:18:26.002 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:26.003+0000 7f8da97fa640 1 -- 192.168.123.105:0/3353423706 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f8d980179c0 con 0x7f8dac100830 2026-03-09T20:18:26.104 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:26.104+0000 7f8db228f640 1 -- 192.168.123.105:0/3353423706 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "osd crush tunables", "profile": "default"} v 0) -- 0x7f8d7c005470 con 0x7f8dac100830 2026-03-09T20:18:26.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:26 vm05 ceph-mon[51870]: Added host vm09 2026-03-09T20:18:26.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:26 vm05 ceph-mon[51870]: from='client.14176 v1:192.168.123.105:0/2970828041' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:18:26.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:26 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:26.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:26 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3353423706' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-09T20:18:26.979 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:26.979+0000 7f8da97fa640 1 -- 192.168.123.105:0/3353423706 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "osd crush tunables", "profile": "default"}]=0 adjusted tunables profile to default v4) ==== 124+0+0 (unknown 3126668360 0 0) 0x7f8d98020490 con 0x7f8dac100830 2026-03-09T20:18:26.979 INFO:teuthology.orchestra.run.vm05.stderr:adjusted tunables profile to default 2026-03-09T20:18:26.981 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:26.981+0000 7f8db228f640 1 -- 192.168.123.105:0/3353423706 >> v1:192.168.123.105:6800/3290461294 conn(0x7f8d8803ecb0 legacy=0x7f8d88041170 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:26.981 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:26.981+0000 7f8db228f640 1 -- 192.168.123.105:0/3353423706 >> v1:192.168.123.105:6789/0 conn(0x7f8dac100830 legacy=0x7f8dac19b980 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:26.981 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:26.982+0000 7f8db228f640 1 -- 192.168.123.105:0/3353423706 shutdown_connections 2026-03-09T20:18:26.981 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:26.982+0000 7f8db228f640 1 -- 192.168.123.105:0/3353423706 >> 192.168.123.105:0/3353423706 conn(0x7f8dac0fc520 msgr2=0x7f8dac101c90 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:26.981 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:26.982+0000 7f8db228f640 1 -- 192.168.123.105:0/3353423706 shutdown_connections 2026-03-09T20:18:26.981 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:26.982+0000 7f8db228f640 1 -- 192.168.123.105:0/3353423706 wait complete. 
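Annotation (not part of the run log): the mon acknowledges "osd crush tunables default" with "adjusted tunables profile to default". A hedged follow-up sketch, assuming the JSON output of "ceph osd crush show-tunables" carries a "profile" field (true on recent releases), is shown below; it is illustrative only and lets cephadm infer the single local fsid.

    # Hypothetical check: confirm the CRUSH tunables profile after the
    # "adjusted tunables profile to default" ack seen above.
    import json
    import subprocess

    def crush_profile():
        out = subprocess.run(
            ["sudo", "cephadm", "shell", "--",
             "ceph", "osd", "crush", "show-tunables", "-f", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out).get("profile")

    assert crush_profile() == "default"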
2026-03-09T20:18:27.354 INFO:tasks.cephadm:Adding mon.a on vm05 2026-03-09T20:18:27.354 INFO:tasks.cephadm:Adding mon.c on vm05 2026-03-09T20:18:27.354 INFO:tasks.cephadm:Adding mon.b on vm09 2026-03-09T20:18:27.354 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph orch apply mon '3;vm05:[v1:192.168.123.105:6789]=a;vm05:[v1:192.168.123.105:6790]=c;vm09:[v1:192.168.123.109:6789]=b' 2026-03-09T20:18:27.527 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T20:18:27.575 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T20:18:27.862 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:27.861+0000 7f2a9b7e0640 1 -- 192.168.123.109:0/3204337857 >> v1:192.168.123.105:6789/0 conn(0x7f2a941044d0 legacy=0x7f2a941048d0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:27.862 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:27.862+0000 7f2a9b7e0640 1 -- 192.168.123.109:0/3204337857 shutdown_connections 2026-03-09T20:18:27.862 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:27.862+0000 7f2a9b7e0640 1 -- 192.168.123.109:0/3204337857 >> 192.168.123.109:0/3204337857 conn(0x7f2a940ffd70 msgr2=0x7f2a94102160 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:27.862 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:27.862+0000 7f2a9b7e0640 1 -- 192.168.123.109:0/3204337857 shutdown_connections 2026-03-09T20:18:27.863 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:27.862+0000 7f2a9b7e0640 1 -- 192.168.123.109:0/3204337857 wait complete. 
2026-03-09T20:18:27.863 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:27.863+0000 7f2a9b7e0640 1 Processor -- start 2026-03-09T20:18:27.863 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:27.863+0000 7f2a9b7e0640 1 -- start start 2026-03-09T20:18:27.863 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:27.863+0000 7f2a9b7e0640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f2a9419b540 con 0x7f2a941044d0 2026-03-09T20:18:27.863 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:27.863+0000 7f2a99555640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f2a941044d0 0x7f2a9419ae30 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.109:43958/0 (socket says 192.168.123.109:43958) 2026-03-09T20:18:27.863 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:27.863+0000 7f2a99555640 1 -- 192.168.123.109:0/3903292022 learned_addr learned my addr 192.168.123.109:0/3903292022 (peer_addr_for_me v1:192.168.123.109:0/0) 2026-03-09T20:18:27.864 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:27.863+0000 7f2a8a7fc640 1 -- 192.168.123.109:0/3903292022 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1200836218 0 0) 0x7f2a9419b540 con 0x7f2a941044d0 2026-03-09T20:18:27.864 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:27.863+0000 7f2a8a7fc640 1 -- 192.168.123.109:0/3903292022 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f2a64003620 con 0x7f2a941044d0 2026-03-09T20:18:27.864 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:27.864+0000 7f2a8a7fc640 1 -- 192.168.123.109:0/3903292022 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1974219186 0 0) 0x7f2a64003620 con 0x7f2a941044d0 2026-03-09T20:18:27.864 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:27.864+0000 7f2a8a7fc640 1 -- 192.168.123.109:0/3903292022 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f2a9419b540 con 0x7f2a941044d0 2026-03-09T20:18:27.864 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:27.864+0000 7f2a8a7fc640 1 -- 192.168.123.109:0/3903292022 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f2a7c002d10 con 0x7f2a941044d0 2026-03-09T20:18:27.864 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:27.864+0000 7f2a8a7fc640 1 -- 192.168.123.109:0/3903292022 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2395735245 0 0) 0x7f2a9419b540 con 0x7f2a941044d0 2026-03-09T20:18:27.864 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:27.864+0000 7f2a8a7fc640 1 -- 192.168.123.109:0/3903292022 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f2a9419b710 con 0x7f2a941044d0 2026-03-09T20:18:27.864 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:27.864+0000 7f2a9b7e0640 1 -- 192.168.123.109:0/3903292022 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f2a9419ba20 con 0x7f2a941044d0 2026-03-09T20:18:27.864 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:27.864+0000 7f2a9b7e0640 1 -- 192.168.123.109:0/3903292022 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f2a9419f5b0 con 0x7f2a941044d0 2026-03-09T20:18:27.865 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:27.864+0000 7f2a8a7fc640 1 
-- 192.168.123.109:0/3903292022 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f2a7c0034c0 con 0x7f2a941044d0 2026-03-09T20:18:27.865 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:27.864+0000 7f2a8a7fc640 1 -- 192.168.123.109:0/3903292022 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f2a7c004c90 con 0x7f2a941044d0 2026-03-09T20:18:27.865 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:27.865+0000 7f2a8a7fc640 1 -- 192.168.123.109:0/3903292022 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 13) ==== 50271+0+0 (unknown 2746147771 0 0) 0x7f2a7c004eb0 con 0x7f2a941044d0 2026-03-09T20:18:27.865 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:27.865+0000 7f2a8a7fc640 1 -- 192.168.123.109:0/3903292022 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(4..4 src has 1..4) ==== 1069+0+0 (unknown 2499184964 0 0) 0x7f2a7c04e3e0 con 0x7f2a941044d0 2026-03-09T20:18:27.866 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:27.865+0000 7f2a9b7e0640 1 -- 192.168.123.109:0/3903292022 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f2a5c005180 con 0x7f2a941044d0 2026-03-09T20:18:27.870 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:27.870+0000 7f2a8a7fc640 1 -- 192.168.123.109:0/3903292022 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f2a7c018a10 con 0x7f2a941044d0 2026-03-09T20:18:27.980 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:27.979+0000 7f2a9b7e0640 1 -- 192.168.123.109:0/3903292022 --> v1:192.168.123.105:6800/3290461294 -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "mon", "placement": "3;vm05:[v1:192.168.123.105:6789]=a;vm05:[v1:192.168.123.105:6790]=c;vm09:[v1:192.168.123.109:6789]=b", "target": ["mon-mgr", ""]}) -- 0x7f2a5c002cc0 con 0x7f2a6403ec60 2026-03-09T20:18:28.165 INFO:teuthology.orchestra.run.vm09.stdout:Scheduled mon update... 
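Annotation (not part of the run log): the placement argument passed to "ceph orch apply mon" above is a semicolon-separated string of a count followed by explicit host:[addr]=name entries, one per mon in this job (a and c on vm05, b on vm09). A purely illustrative sketch of assembling that string follows; the dict and helper are hypothetical, not the cephadm task's actual code.

    # Illustrative only: build the explicit mon placement string
    # "count;host:[addr]=name;..." used by "ceph orch apply mon" above.
    mons = {
        "a": ("vm05", "v1:192.168.123.105:6789"),
        "c": ("vm05", "v1:192.168.123.105:6790"),
        "b": ("vm09", "v1:192.168.123.109:6789"),
    }

    placement = ";".join(
        [str(len(mons))]
        + [f"{host}:[{addr}]={name}" for name, (host, addr) in mons.items()]
    )
    # -> "3;vm05:[v1:192.168.123.105:6789]=a;vm05:[v1:192.168.123.105:6790]=c;vm09:[v1:192.168.123.109:6789]=b"
    print(placement)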
2026-03-09T20:18:28.174 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:28.165+0000 7f2a8a7fc640 1 -- 192.168.123.109:0/3903292022 <== mgr.14150 v1:192.168.123.105:6800/3290461294 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+24 (unknown 0 0 3265049985) 0x7f2a5c002cc0 con 0x7f2a6403ec60 2026-03-09T20:18:28.174 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:28.167+0000 7f2a9b7e0640 1 -- 192.168.123.109:0/3903292022 >> v1:192.168.123.105:6800/3290461294 conn(0x7f2a6403ec60 legacy=0x7f2a64041120 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:28.174 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:28.167+0000 7f2a9b7e0640 1 -- 192.168.123.109:0/3903292022 >> v1:192.168.123.105:6789/0 conn(0x7f2a941044d0 legacy=0x7f2a9419ae30 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:28.174 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:28.167+0000 7f2a9b7e0640 1 -- 192.168.123.109:0/3903292022 shutdown_connections 2026-03-09T20:18:28.174 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:28.167+0000 7f2a9b7e0640 1 -- 192.168.123.109:0/3903292022 >> 192.168.123.109:0/3903292022 conn(0x7f2a940ffd70 msgr2=0x7f2a94102460 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:28.174 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:28.167+0000 7f2a9b7e0640 1 -- 192.168.123.109:0/3903292022 shutdown_connections 2026-03-09T20:18:28.174 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:28.167+0000 7f2a9b7e0640 1 -- 192.168.123.109:0/3903292022 wait complete. 2026-03-09T20:18:28.355 DEBUG:teuthology.orchestra.run.vm05:mon.c> sudo journalctl -f -n 0 -u ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@mon.c.service 2026-03-09T20:18:28.356 DEBUG:teuthology.orchestra.run.vm09:mon.b> sudo journalctl -f -n 0 -u ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@mon.b.service 2026-03-09T20:18:28.358 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 2026-03-09T20:18:28.358 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph mon dump -f json 2026-03-09T20:18:28.379 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:28 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3353423706' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-09T20:18:28.379 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:28 vm05 ceph-mon[51870]: osdmap e4: 0 total, 0 up, 0 in 2026-03-09T20:18:28.582 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/config/ceph.conf 2026-03-09T20:18:28.715 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:28.714+0000 7f48a3fff640 1 -- 192.168.123.109:0/1236115183 >> v1:192.168.123.105:6789/0 conn(0x7f48a4074e50 legacy=0x7f48a4073370 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:28.716 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:28.715+0000 7f48a3fff640 1 -- 192.168.123.109:0/1236115183 shutdown_connections 2026-03-09T20:18:28.716 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:28.715+0000 7f48a3fff640 1 -- 192.168.123.109:0/1236115183 >> 192.168.123.109:0/1236115183 conn(0x7f48a406ef70 msgr2=0x7f48a40713b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:28.716 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:28.716+0000 7f48a3fff640 1 -- 192.168.123.109:0/1236115183 shutdown_connections 2026-03-09T20:18:28.716 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:28.716+0000 7f48a3fff640 1 -- 192.168.123.109:0/1236115183 wait complete. 2026-03-09T20:18:28.717 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:28.716+0000 7f48a3fff640 1 Processor -- start 2026-03-09T20:18:28.717 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:28.716+0000 7f48a3fff640 1 -- start start 2026-03-09T20:18:28.717 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:28.716+0000 7f48a3fff640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f48a41a82e0 con 0x7f48a4074e50 2026-03-09T20:18:28.717 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:28.716+0000 7f48a2ffd640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f48a4074e50 0x7f48a41a7bd0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.109:43982/0 (socket says 192.168.123.109:43982) 2026-03-09T20:18:28.717 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:28.716+0000 7f48a2ffd640 1 -- 192.168.123.109:0/3306492740 learned_addr learned my addr 192.168.123.109:0/3306492740 (peer_addr_for_me v1:192.168.123.109:0/0) 2026-03-09T20:18:28.717 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:28.717+0000 7f4883fff640 1 -- 192.168.123.109:0/3306492740 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3665097631 0 0) 0x7f48a41a82e0 con 0x7f48a4074e50 2026-03-09T20:18:28.718 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:28.717+0000 7f4883fff640 1 -- 192.168.123.109:0/3306492740 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f4878003620 con 0x7f48a4074e50 2026-03-09T20:18:28.718 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:28.717+0000 7f4883fff640 1 -- 192.168.123.109:0/3306492740 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 744523890 0 0) 0x7f4878003620 con 0x7f48a4074e50 2026-03-09T20:18:28.718 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:28.717+0000 7f4883fff640 1 -- 192.168.123.109:0/3306492740 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f48a41a82e0 
con 0x7f48a4074e50 2026-03-09T20:18:28.718 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:28.717+0000 7f4883fff640 1 -- 192.168.123.109:0/3306492740 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f4894003170 con 0x7f48a4074e50 2026-03-09T20:18:28.719 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:28.718+0000 7f4883fff640 1 -- 192.168.123.109:0/3306492740 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 437562805 0 0) 0x7f48a41a82e0 con 0x7f48a4074e50 2026-03-09T20:18:28.719 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:28.718+0000 7f4883fff640 1 -- 192.168.123.109:0/3306492740 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f48a41a84b0 con 0x7f48a4074e50 2026-03-09T20:18:28.720 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:28.718+0000 7f48a3fff640 1 -- 192.168.123.109:0/3306492740 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f48a41a87c0 con 0x7f48a4074e50 2026-03-09T20:18:28.720 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:28.718+0000 7f48a3fff640 1 -- 192.168.123.109:0/3306492740 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f48a41ac350 con 0x7f48a4074e50 2026-03-09T20:18:28.720 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:28.718+0000 7f4883fff640 1 -- 192.168.123.109:0/3306492740 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f4894003400 con 0x7f48a4074e50 2026-03-09T20:18:28.720 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:28.718+0000 7f4883fff640 1 -- 192.168.123.109:0/3306492740 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f4894004cc0 con 0x7f48a4074e50 2026-03-09T20:18:28.720 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:28.718+0000 7f48a3fff640 1 -- 192.168.123.109:0/3306492740 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f48a410ed70 con 0x7f48a4074e50 2026-03-09T20:18:28.720 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:28.719+0000 7f4883fff640 1 -- 192.168.123.109:0/3306492740 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 13) ==== 50271+0+0 (unknown 2746147771 0 0) 0x7f4894003ab0 con 0x7f48a4074e50 2026-03-09T20:18:28.720 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:28.719+0000 7f4883fff640 1 -- 192.168.123.109:0/3306492740 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(4..4 src has 1..4) ==== 1069+0+0 (unknown 2499184964 0 0) 0x7f489404d360 con 0x7f48a4074e50 2026-03-09T20:18:28.722 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:28.722+0000 7f4883fff640 1 -- 192.168.123.109:0/3306492740 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f4894017990 con 0x7f48a4074e50 2026-03-09T20:18:28.867 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:28.867+0000 7f48a3fff640 1 -- 192.168.123.109:0/3306492740 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "mon dump", "format": "json"} v 0) -- 0x7f48a410efd0 con 0x7f48a4074e50 2026-03-09T20:18:28.868 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:28.867+0000 7f4883fff640 1 -- 192.168.123.109:0/3306492740 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 
v1) ==== 95+0+699 (unknown 2237029548 0 653155915) 0x7f4894004420 con 0x7f48a4074e50 2026-03-09T20:18:28.868 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:18:28.868 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"c0151936-1bf4-11f1-b896-23f7bea8a6ea","modified":"2026-03-09T20:17:53.169307Z","created":"2026-03-09T20:17:53.169307Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T20:18:28.870 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-09T20:18:28.870 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:28.870+0000 7f48a3fff640 1 -- 192.168.123.109:0/3306492740 >> v1:192.168.123.105:6800/3290461294 conn(0x7f487803ec60 legacy=0x7f4878041120 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:28.870 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:28.870+0000 7f48a3fff640 1 -- 192.168.123.109:0/3306492740 >> v1:192.168.123.105:6789/0 conn(0x7f48a4074e50 legacy=0x7f48a41a7bd0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:28.870 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:28.870+0000 7f48a3fff640 1 -- 192.168.123.109:0/3306492740 shutdown_connections 2026-03-09T20:18:28.870 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:28.870+0000 7f48a3fff640 1 -- 192.168.123.109:0/3306492740 >> 192.168.123.109:0/3306492740 conn(0x7f48a406ef70 msgr2=0x7f48a410c3c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:28.870 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:28.870+0000 7f48a3fff640 1 -- 192.168.123.109:0/3306492740 shutdown_connections 2026-03-09T20:18:28.871 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:28.870+0000 7f48a3fff640 1 -- 192.168.123.109:0/3306492740 wait complete. 
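Annotation (not part of the run log): at this point the monmap (epoch 1, dumped above) still lists only mon.a, so the task keeps logging "Waiting for 3 mons in monmap..." and re-running "ceph mon dump -f json". A hypothetical sketch of such a poll loop, using the "mons"/"name" JSON structure visible in the dump above, is given below; the timeout value is arbitrary and for illustration only.

    # Hypothetical sketch of the "Waiting for 3 mons in monmap" poll:
    # dump the monmap as JSON until mons a, b and c are all present.
    import json
    import subprocess
    import time

    def mon_names():
        out = subprocess.run(
            ["sudo", "cephadm", "shell", "--",
             "ceph", "mon", "dump", "-f", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return {m["name"] for m in json.loads(out)["mons"]}

    deadline = time.time() + 600   # arbitrary timeout for illustration
    while mon_names() != {"a", "b", "c"}:
        if time.time() > deadline:
            raise TimeoutError("monmap never reached 3 mons")
        time.sleep(5)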
2026-03-09T20:18:29.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:29 vm05 ceph-mon[51870]: from='client.14180 v1:192.168.123.109:0/3903292022' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm05:[v1:192.168.123.105:6789]=a;vm05:[v1:192.168.123.105:6790]=c;vm09:[v1:192.168.123.109:6789]=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:29.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:29 vm05 ceph-mon[51870]: Saving service mon spec with placement vm05:[v1:192.168.123.105:6789]=a;vm05:[v1:192.168.123.105:6790]=c;vm09:[v1:192.168.123.109:6789]=b;count:3 2026-03-09T20:18:29.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:29 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:29.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:29 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:29.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:29 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:29.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:29 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:29.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:29 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:29.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:29 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:18:29.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:29 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:18:29.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:29 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:18:29.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:29 vm05 ceph-mon[51870]: Updating vm09:/etc/ceph/ceph.conf 2026-03-09T20:18:29.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:29 vm05 ceph-mon[51870]: Updating vm09:/var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/config/ceph.conf 2026-03-09T20:18:29.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:29 vm05 ceph-mon[51870]: Updating vm09:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:18:29.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:29 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:29.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:29 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:29.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:29 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:29.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:29 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T20:18:29.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:29 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-09T20:18:29.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:29 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.109:0/3306492740' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T20:18:30.042 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 2026-03-09T20:18:30.042 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph mon dump -f json 2026-03-09T20:18:30.263 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.b/config 2026-03-09T20:18:30.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:30 vm05 ceph-mon[51870]: Updating vm09:/var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/config/ceph.client.admin.keyring 2026-03-09T20:18:30.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:30 vm05 ceph-mon[51870]: Deploying daemon mon.b on vm09 2026-03-09T20:18:30.414 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:30.413+0000 7f08b078c640 1 -- 192.168.123.109:0/4038444360 >> v1:192.168.123.105:6789/0 conn(0x7f08a8073b90 legacy=0x7f08a8073f70 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:30.414 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:30.414+0000 7f08b078c640 1 -- 192.168.123.109:0/4038444360 shutdown_connections 2026-03-09T20:18:30.414 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:30.414+0000 7f08b078c640 1 -- 192.168.123.109:0/4038444360 >> 192.168.123.109:0/4038444360 conn(0x7f08a806d390 msgr2=0x7f08a806d7a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:30.414 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:30.414+0000 7f08b078c640 1 -- 192.168.123.109:0/4038444360 shutdown_connections 2026-03-09T20:18:30.415 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:30.414+0000 7f08b078c640 1 -- 192.168.123.109:0/4038444360 wait complete. 
2026-03-09T20:18:30.415 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:30.415+0000 7f08b078c640 1 Processor -- start 2026-03-09T20:18:30.415 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:30.415+0000 7f08b078c640 1 -- start start 2026-03-09T20:18:30.415 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:30.415+0000 7f08b078c640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f08a81a87e0 con 0x7f08a8073b90 2026-03-09T20:18:30.415 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:30.415+0000 7f08ae501640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f08a8073b90 0x7f08a81a80d0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.109:43990/0 (socket says 192.168.123.109:43990) 2026-03-09T20:18:30.415 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:30.415+0000 7f08ae501640 1 -- 192.168.123.109:0/3254033658 learned_addr learned my addr 192.168.123.109:0/3254033658 (peer_addr_for_me v1:192.168.123.109:0/0) 2026-03-09T20:18:30.415 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:30.415+0000 7f08977fe640 1 -- 192.168.123.109:0/3254033658 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3072139349 0 0) 0x7f08a81a87e0 con 0x7f08a8073b90 2026-03-09T20:18:30.416 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:30.415+0000 7f08977fe640 1 -- 192.168.123.109:0/3254033658 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f0888003620 con 0x7f08a8073b90 2026-03-09T20:18:30.416 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:30.416+0000 7f08977fe640 1 -- 192.168.123.109:0/3254033658 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2630913123 0 0) 0x7f0888003620 con 0x7f08a8073b90 2026-03-09T20:18:30.416 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:30.416+0000 7f08977fe640 1 -- 192.168.123.109:0/3254033658 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f08a81a87e0 con 0x7f08a8073b90 2026-03-09T20:18:30.416 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:30.416+0000 7f08977fe640 1 -- 192.168.123.109:0/3254033658 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f0898002890 con 0x7f08a8073b90 2026-03-09T20:18:30.416 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:30.416+0000 7f08977fe640 1 -- 192.168.123.109:0/3254033658 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2833676065 0 0) 0x7f08a81a87e0 con 0x7f08a8073b90 2026-03-09T20:18:30.416 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:30.416+0000 7f08977fe640 1 -- 192.168.123.109:0/3254033658 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f08a81a89b0 con 0x7f08a8073b90 2026-03-09T20:18:30.416 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:30.416+0000 7f08b078c640 1 -- 192.168.123.109:0/3254033658 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f08a81a8ca0 con 0x7f08a8073b90 2026-03-09T20:18:30.416 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:30.416+0000 7f08b078c640 1 -- 192.168.123.109:0/3254033658 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f08a81b0e80 con 0x7f08a8073b90 2026-03-09T20:18:30.417 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:30.416+0000 7f08977fe640 1 
-- 192.168.123.109:0/3254033658 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f08980042d0 con 0x7f08a8073b90 2026-03-09T20:18:30.417 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:30.416+0000 7f08977fe640 1 -- 192.168.123.109:0/3254033658 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 2242798433 0 0) 0x7f0898004db0 con 0x7f08a8073b90 2026-03-09T20:18:30.417 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:30.417+0000 7f08977fe640 1 -- 192.168.123.109:0/3254033658 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 13) ==== 50271+0+0 (unknown 2746147771 0 0) 0x7f0898005030 con 0x7f08a8073b90 2026-03-09T20:18:30.417 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:30.417+0000 7f08977fe640 1 -- 192.168.123.109:0/3254033658 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(4..4 src has 1..4) ==== 1069+0+0 (unknown 2499184964 0 0) 0x7f0898002f10 con 0x7f08a8073b90 2026-03-09T20:18:30.417 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:30.417+0000 7f08b078c640 1 -- 192.168.123.109:0/3254033658 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f0878005180 con 0x7f08a8073b90 2026-03-09T20:18:30.423 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:30.423+0000 7f08977fe640 1 -- 192.168.123.109:0/3254033658 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f0898018990 con 0x7f08a8073b90 2026-03-09T20:18:30.473 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:30.473+0000 7f08977fe640 1 -- 192.168.123.109:0/3254033658 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_map magic: 0 ==== 239+0+0 (unknown 1104606392 0 0) 0x7f0898018060 con 0x7f08a8073b90 2026-03-09T20:18:30.576 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:30.575+0000 7f08b078c640 1 -- 192.168.123.109:0/3254033658 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "mon dump", "format": "json"} v 0) -- 0x7f0878005470 con 0x7f08a8073b90 2026-03-09T20:18:31.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 podman[61309]: 2026-03-09 20:18:31.484091619 +0000 UTC m=+0.017072941 container create acf150ca4348c3c2159aeeeab35b2fb50f3582820bea8096a350877217b89a63 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mon-c, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223) 2026-03-09T20:18:31.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 podman[61309]: 2026-03-09 20:18:31.52543921 +0000 UTC m=+0.058420551 container init acf150ca4348c3c2159aeeeab35b2fb50f3582820bea8096a350877217b89a63 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, 
name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mon-c, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-09T20:18:31.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 podman[61309]: 2026-03-09 20:18:31.53049288 +0000 UTC m=+0.063474211 container start acf150ca4348c3c2159aeeeab35b2fb50f3582820bea8096a350877217b89a63 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mon-c, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid) 2026-03-09T20:18:31.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 bash[61309]: acf150ca4348c3c2159aeeeab35b2fb50f3582820bea8096a350877217b89a63 2026-03-09T20:18:31.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 podman[61309]: 2026-03-09 20:18:31.477206652 +0000 UTC m=+0.010187993 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T20:18:31.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 systemd[1]: Started Ceph mon.c for c0151936-1bf4-11f1-b896-23f7bea8a6ea. 
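The messenger trace above shows the admin client completing the v1 auth handshake with mon.a, subscribing to the mon/mgr/osd maps, fetching get_command_descriptions, and then dispatching a "mon dump" mon_command. A minimal sketch of driving that same mon_command path from Python with the rados bindings follows; it assumes python3-rados is installed and that /etc/ceph/ceph.conf plus an admin keyring are readable on the node, which this log does not itself show.

    import json
    import rados  # python3-rados bindings (assumption: installed on the node)

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # path is an assumption
    cluster.connect()  # performs the same auth/mon_subscribe handshake traced above
    try:
        cmd = json.dumps({"prefix": "mon dump", "format": "json"})
        ret, outbuf, outs = cluster.mon_command(cmd, b'')
        if ret == 0:
            monmap = json.loads(outbuf)
            for mon in monmap.get("mons", []):
                print(mon.get("name"), mon.get("public_addr") or mon.get("addr"))
        else:
            print("mon dump failed:", ret, outs)
    finally:
        cluster.shutdown()

The (ret, outbuf, outs) triple is what the ceph CLI prints for the same command; outbuf carries the JSON monmap returned in the mon_command_ack seen above.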
2026-03-09T20:18:31.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: set uid:gid to 167:167 (ceph:ceph) 2026-03-09T20:18:31.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 6 2026-03-09T20:18:31.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: pidfile_write: ignore empty --pid-file 2026-03-09T20:18:31.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: load: jerasure load: lrc 2026-03-09T20:18:31.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: RocksDB version: 7.9.2 2026-03-09T20:18:31.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Git sha 0 2026-03-09T20:18:31.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Compile date 2026-02-25 18:11:04 2026-03-09T20:18:31.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: DB SUMMARY 2026-03-09T20:18:31.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: DB Session ID: 1N1VYC6JIZB1I7TOMX1Q 2026-03-09T20:18:31.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: CURRENT file: CURRENT 2026-03-09T20:18:31.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: IDENTITY file: IDENTITY 2026-03-09T20:18:31.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: MANIFEST file: MANIFEST-000005 size: 59 Bytes 2026-03-09T20:18:31.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: SST files in /var/lib/ceph/mon/ceph-c/store.db dir, Total Num: 0, files: 2026-03-09T20:18:31.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-c/store.db: 000004.log size: 476 ; 2026-03-09T20:18:31.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.error_if_exists: 0 2026-03-09T20:18:31.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.create_if_missing: 0 2026-03-09T20:18:31.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.paranoid_checks: 1 2026-03-09T20:18:31.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.flush_verify_memtable_count: 1 2026-03-09T20:18:31.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-09T20:18:31.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-09T20:18:31.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.env: 0x55cb6644adc0 2026-03-09T20:18:31.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.fs: PosixFileSystem 2026-03-09T20:18:31.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.info_log: 0x55cb686965c0 2026-03-09T20:18:31.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.max_file_opening_threads: 16 2026-03-09T20:18:31.912 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.statistics: (nil) 2026-03-09T20:18:31.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.use_fsync: 0 2026-03-09T20:18:31.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.max_log_file_size: 0 2026-03-09T20:18:31.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-09T20:18:31.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.log_file_time_to_roll: 0 2026-03-09T20:18:31.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.keep_log_file_num: 1000 2026-03-09T20:18:31.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.recycle_log_file_num: 0 2026-03-09T20:18:31.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.allow_fallocate: 1 2026-03-09T20:18:31.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.allow_mmap_reads: 0 2026-03-09T20:18:31.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.allow_mmap_writes: 0 2026-03-09T20:18:31.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.use_direct_reads: 0 2026-03-09T20:18:31.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.create_missing_column_families: 0 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.db_log_dir: 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.wal_dir: 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.table_cache_numshardbits: 6 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.WAL_ttl_seconds: 0 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.WAL_size_limit_MB: 0 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.is_fd_close_on_exec: 1 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.advise_random_on_open: 1 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.db_write_buffer_size: 0 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.write_buffer_manager: 0x55cb6869b900 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.access_hint_on_compaction_start: 1 
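The journalctl@ceph.mon.c stream above comes from the containerized mon.c that cephadm just started under systemd ("Started Ceph mon.c for c0151936-..."). A sketch for locating that daemon on the host and tailing the same journal follows; the cephadm ls field names and the ceph-<fsid>@mon.c unit name are assumptions based on cephadm's usual conventions, not taken from this log.

    import json
    import subprocess

    FSID = "c0151936-1bf4-11f1-b896-23f7bea8a6ea"  # fsid printed in the entries above

    # "cephadm ls" reports the daemons deployed on this host as JSON; the exact
    # field names ("name", "container_id", ...) are assumptions here.
    daemons = json.loads(subprocess.check_output(["sudo", "cephadm", "ls"]))
    mon_c = next((d for d in daemons if d.get("name") == "mon.c"), None)
    if mon_c:
        print("mon.c container:", mon_c.get("container_id"),
              "image:", mon_c.get("container_image_name"))

    # Follow the same journal stream captured as journalctl@ceph.mon.c above;
    # ceph-<fsid>@mon.c is cephadm's usual unit-name convention (assumption).
    subprocess.run(["sudo", "journalctl", "-u", f"ceph-{FSID}@mon.c",
                    "-n", "50", "--no-pager"], check=False)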
2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.use_adaptive_mutex: 0 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.rate_limiter: (nil) 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.wal_recovery_mode: 2 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.enable_thread_tracking: 0 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.enable_pipelined_write: 0 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.unordered_write: 0 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.row_cache: None 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.wal_filter: None 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.allow_ingest_behind: 0 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.two_write_queues: 0 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.manual_wal_flush: 0 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.wal_compression: 0 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.atomic_flush: 0 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.persist_stats_to_disk: 0 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.write_dbid_to_manifest: 0 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.log_readahead_size: 0 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: 
rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.best_efforts_recovery: 0 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.allow_data_in_errors: 0 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.db_host_id: __hostname__ 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.enforce_single_del_contracts: true 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.max_background_jobs: 2 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.max_background_compactions: -1 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.max_subcompactions: 1 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.delayed_write_rate : 16777216 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.max_total_wal_size: 0 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.stats_dump_period_sec: 600 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.stats_persist_period_sec: 600 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.max_open_files: -1 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.bytes_per_sync: 0 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.wal_bytes_per_sync: 0 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.strict_bytes_per_sync: 0 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.compaction_readahead_size: 0 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.max_background_flushes: -1 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Compression 
algorithms supported: 2026-03-09T20:18:31.913 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: kZSTD supported: 0 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: kXpressCompression supported: 0 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: kBZip2Compression supported: 0 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: kLZ4Compression supported: 1 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: kZlibCompression supported: 1 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: kLZ4HCCompression supported: 1 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: kSnappyCompression supported: 1 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Fast CRC32 supported: Supported on x86 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: DMutex implementation: pthread_mutex_t 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-c/store.db/MANIFEST-000005 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.merge_operator: 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.compaction_filter: None 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.compaction_filter_factory: None 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.sst_partitioner_factory: None 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.memtable_factory: SkipListFactory 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.table_factory: BlockBasedTable 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cb686965a0) 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout: cache_index_and_filter_blocks: 1 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout: pin_top_level_index_and_filter: 1 2026-03-09T20:18:31.914 
INFO:journalctl@ceph.mon.c.vm05.stdout: index_type: 0 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout: data_block_index_type: 0 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout: index_shortening: 1 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout: data_block_hash_table_util_ratio: 0.750000 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout: checksum: 4 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout: no_block_cache: 0 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout: block_cache: 0x55cb686bb350 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout: block_cache_name: BinnedLRUCache 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout: block_cache_options: 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout: capacity : 536870912 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout: num_shard_bits : 4 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout: strict_capacity_limit : 0 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout: high_pri_pool_ratio: 0.000 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout: block_cache_compressed: (nil) 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout: persistent_cache: (nil) 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout: block_size: 4096 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout: block_size_deviation: 10 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout: block_restart_interval: 16 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout: index_block_restart_interval: 1 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout: metadata_block_size: 4096 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout: partition_filters: 0 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout: use_delta_encoding: 1 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout: filter_policy: bloomfilter 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout: whole_key_filtering: 1 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout: verify_compression: 0 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout: read_amp_bytes_per_bit: 0 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout: format_version: 5 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout: enable_index_compression: 1 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout: block_align: 0 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout: max_auto_readahead_size: 262144 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout: prepopulate_block_cache: 0 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout: initial_auto_readahead_size: 8192 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout: num_file_reads_for_auto_readahead: 2 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.write_buffer_size: 33554432 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.max_write_buffer_number: 2 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.compression: NoCompression 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.bottommost_compression: Disabled 2026-03-09T20:18:31.914 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.prefix_extractor: nullptr 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-09T20:18:31.914 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.num_levels: 7 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.compression_opts.window_bits: -14 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.compression_opts.level: 32767 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.compression_opts.strategy: 0 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: 
rocksdb: Options.compression_opts.enabled: false 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.target_file_size_base: 67108864 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.target_file_size_multiplier: 1 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.arena_block_size: 1048576 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-09T20:18:31.915 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.disable_auto_compactions: 0 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.inplace_update_support: 0 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.inplace_update_num_locks: 10000 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.memtable_huge_page_size: 0 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.bloom_locality: 0 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.max_successive_merges: 0 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.optimize_filters_for_hits: 0 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.paranoid_file_checks: 0 2026-03-09T20:18:31.915 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.force_consistency_checks: 1 
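The long Options.* dump above is the monitor's RocksDB tuning (NoCompression, a 32 MiB write buffer, level_compaction_dynamic_level_bytes=1), which Ceph exposes through the mon_rocksdb_options setting. A small sketch for reading that setting back from the running cluster, assuming the ceph CLI and admin credentials are available on the host:

    import subprocess

    # "ceph config get <who> <option>" reads the active value from the mon config
    # store; assumes the ceph CLI and an admin keyring are present on this host.
    value = subprocess.check_output(
        ["ceph", "config", "get", "mon", "mon_rocksdb_options"], text=True).strip()
    print("mon_rocksdb_options:", value)
    # Entries such as write_buffer_size=33554432 and compression=kNoCompression
    # correspond to Options.write_buffer_size / Options.compression in the dump.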
2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.report_bg_io_stats: 0 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.ttl: 2592000 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.periodic_compaction_seconds: 0 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.enable_blob_files: false 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.min_blob_size: 0 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.blob_file_size: 268435456 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.blob_compression_type: NoCompression 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.enable_blob_garbage_collection: false 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.blob_file_starting_level: 0 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-c/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: bd5e2acb-34c3-4098-a6f4-366089c1d0f0 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773087511558503, "job": 1, "event": "recovery_started", "wal_files": [4]} 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773087511559203, "cf_name": 
"default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1608, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 488, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 366, "raw_average_value_size": 73, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773087511, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bd5e2acb-34c3-4098-a6f4-366089c1d0f0", "db_session_id": "1N1VYC6JIZB1I7TOMX1Q", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}} 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773087511559310, "job": 1, "event": "recovery_finished"} 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: [db/version_set.cc:5047] Creating manifest 10 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-c/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55cb686bce00 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: DB pointer 0x55cb687d6000 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout: ** DB Stats ** 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout: Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout: Interval WAL: 0 writes, 0 
syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout: Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout: 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout: ** Compaction Stats [default] ** 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout: L0 1/0 1.57 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 2.3 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout: Sum 1/0 1.57 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 2.3 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 2.3 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout: 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout: ** Compaction Stats [default] ** 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout: --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2.3 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout: 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout: 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout: Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout: Flush(GB): cumulative 0.000, interval 0.000 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout: AddFile(Total Files): cumulative 0, interval 0 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout: AddFile(L0 Files): cumulative 0, interval 0 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout: AddFile(Keys): cumulative 0, interval 0 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout: Cumulative compaction: 0.00 GB write, 0.25 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout: Interval compaction: 0.00 GB write, 0.25 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for 
pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout: Block cache BinnedLRUCache@0x55cb686bb350#6 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 5e-06 secs_since: 0 2026-03-09T20:18:31.916 INFO:journalctl@ceph.mon.c.vm05.stdout: Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%) 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout: 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout: ** File Read Latency Histogram By Level [default] ** 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: mon.c does not exist in monmap, will attempt to join an existing cluster 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: using public_addrv v1:192.168.123.105:6790/0 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: starting mon.c rank -1 at public addrs v1:192.168.123.105:6790/0 at bind addrs v1:192.168.123.105:6790/0 mon_data /var/lib/ceph/mon/ceph-c fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: mon.c@-1(???) e0 preinit fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: mon.c@-1(synchronizing).mds e1 new map 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: mon.c@-1(synchronizing).mds e1 print_map 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout: e1 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout: btime 2026-03-09T20:17:54:448734+0000 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout: enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes} 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout: legacy client fscid: -1 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout: 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout: No filesystems configured 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: mon.c@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: mon.c@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: mon.c@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: mon.c@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: 
mon.c@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: mon.c@-1(synchronizing).osd e4 e4: 0 total, 0 up, 0 in 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: mon.c@-1(synchronizing).osd e4 crush map has features 3314932999778484224, adjusting msgr requires 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: mon.c@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: mon.c@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: mon.c@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: mkfs c0151936-1bf4-11f1-b896-23f7bea8a6ea 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: monmap epoch 1 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: last_changed 2026-03-09T20:17:53.169307+0000 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: created 2026-03-09T20:17:53.169307+0000 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: min_mon_release 19 (squid) 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: election_strategy: 1 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: 0: v1:192.168.123.105:6789/0 mon.a 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: fsmap 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: osdmap e1: 0 total, 0 up, 0 in 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: mgrmap e1: no daemons active 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/582173131' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/994014221' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/994014221' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/2308763832' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: monmap epoch 1 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: last_changed 2026-03-09T20:17:53.169307+0000 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: created 2026-03-09T20:17:53.169307+0000 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: min_mon_release 19 (squid) 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: election_strategy: 1 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: 0: v1:192.168.123.105:6789/0 mon.a 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: fsmap 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: osdmap e1: 0 total, 0 up, 0 in 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: mgrmap e1: no daemons active 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2549523660' entity='client.admin' 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1411401586' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1939451897' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: Activating manager daemon y 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: mgrmap e2: y(active, starting, since 0.00431275s) 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14100 v1:192.168.123.105:0/1504054359' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14100 v1:192.168.123.105:0/1504054359' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14100 v1:192.168.123.105:0/1504054359' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14100 v1:192.168.123.105:0/1504054359' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14100 v1:192.168.123.105:0/1504054359' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T20:18:31.917 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: Manager daemon y is now available 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14100 v1:192.168.123.105:0/1504054359' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14100 v1:192.168.123.105:0/1504054359' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14100 v1:192.168.123.105:0/1504054359' entity='mgr.y' 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14100 v1:192.168.123.105:0/1504054359' entity='mgr.y' 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14100 v1:192.168.123.105:0/1504054359' entity='mgr.y' 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: mgrmap e3: y(active, since 1.00873s) 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/393516629' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: mgrmap e4: y(active, since 2s) 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/540394090' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/243839898' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/243839898' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: mgrmap e5: y(active, since 3s) 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1745205018' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: Active manager daemon y restarted 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: Activating manager daemon y 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: osdmap e2: 0 total, 0 up, 0 in 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: mgrmap e6: y(active, starting, since 0.0406857s) 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: Manager daemon y is now available 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: Found migration_current of "None". Setting to last migration. 
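After the cephadm mgr module is enabled and the active mgr restarts, the entries below show the orchestrator backend being wired up and the first host registered: "orch set backend cephadm", "cephadm set-user root", "cephadm generate-key" / "get-pub-key", then "orch host add vm05 192.168.123.105". A minimal stand-alone sketch that replays the same command sequence (this is not the teuthology cephadm task source, just an illustration of the commands dispatched in the log):

    # Sketch only: replay the orchestrator-bootstrap commands dispatched in the
    # mon log below; assumes a working client.admin keyring on the local node.
    import subprocess

    def ceph(*args):
        return subprocess.check_output(["sudo", "ceph", *args]).decode()

    ceph("orch", "set", "backend", "cephadm")
    ceph("cephadm", "set-user", "root")
    ceph("cephadm", "generate-key")
    pub_key = ceph("cephadm", "get-pub-key")  # install into root's authorized_keys on new hosts
    ceph("orch", "host", "add", "vm05", "192.168.123.105")
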
2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: mgrmap e7: y(active, since 1.26275s) 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='client.14122 v1:192.168.123.105:0/1517109885' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='client.14122 v1:192.168.123.105:0/1517109885' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='client.14130 v1:192.168.123.105:0/2258206063' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: [09/Mar/2026:20:18:08] ENGINE Bus STARTING 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: [09/Mar/2026:20:18:08] ENGINE Serving on http://192.168.123.105:8765 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: [09/Mar/2026:20:18:08] ENGINE Serving on https://192.168.123.105:7150 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: [09/Mar/2026:20:18:08] ENGINE Bus STARTED 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: [09/Mar/2026:20:18:08] ENGINE Client ('192.168.123.105', 53636) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 
20:18:31 vm05 ceph-mon[61345]: from='client.14132 v1:192.168.123.105:0/3869147543' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='client.14134 v1:192.168.123.105:0/4106406019' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: Generating ssh key... 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: mgrmap e8: y(active, since 2s) 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='client.14136 v1:192.168.123.105:0/1411171583' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='client.14138 v1:192.168.123.105:0/4172123238' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm05", "addr": "192.168.123.105", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: Deploying cephadm binary to vm05 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: Added host vm05 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='client.14140 v1:192.168.123.105:0/1373650697' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: Saving service mon spec with placement count:5 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1096563322' entity='client.admin' 2026-03-09T20:18:31.918 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='client.14142 v1:192.168.123.105:0/1148759715' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: Saving service mgr spec with placement count:2 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2619650912' entity='client.admin' 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3946134565' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14118 v1:192.168.123.105:0/2513524342' entity='mgr.y' 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3946134565' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: mgrmap e9: y(active, since 8s) 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/41370987' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: Active manager daemon y restarted 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: Activating manager daemon y 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: osdmap e3: 0 total, 0 up, 0 in 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: mgrmap e10: y(active, starting, since 0.00611163s) 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: Manager daemon y is now available 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: [09/Mar/2026:20:18:18] ENGINE Bus STARTING 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: [09/Mar/2026:20:18:18] ENGINE Serving on https://192.168.123.105:7150 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: [09/Mar/2026:20:18:18] ENGINE Client ('192.168.123.105', 60484) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 
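The entries that follow show the dashboard module being initialized: "dashboard create-self-signed-cert" and "dashboard ac-user-create" with the administrator role. An equivalent manual sequence might look like the sketch below; the password file path is hypothetical, and exact flags can vary by release:

    # Sketch only: mirrors the dashboard-setup commands dispatched in the log.
    # "/tmp/dashboard_password" is a hypothetical file holding the initial password.
    import subprocess

    for cmd in (
        ["mgr", "module", "enable", "dashboard"],
        ["dashboard", "create-self-signed-cert"],
        ["dashboard", "ac-user-create", "admin", "-i", "/tmp/dashboard_password", "administrator"],
    ):
        subprocess.check_call(["sudo", "ceph", *cmd])
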
2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: mgrmap e11: y(active, since 1.00898s) 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='client.14154 v1:192.168.123.105:0/3644978669' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='client.14154 v1:192.168.123.105:0/3644978669' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: [09/Mar/2026:20:18:18] ENGINE Serving on http://192.168.123.105:8765 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: [09/Mar/2026:20:18:18] ENGINE Bus STARTED 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='client.14162 v1:192.168.123.105:0/203818805' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='client.14164 v1:192.168.123.105:0/402688010' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: mgrmap e12: y(active, since 2s) 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/586679507' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2481448012' entity='client.admin' 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/2124429187' entity='client.admin' 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='client.14172 v1:192.168.123.105:0/2398881420' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: Updating vm05:/etc/ceph/ceph.conf 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: Updating vm05:/var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/config/ceph.conf 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: Updating vm05:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: Updating vm05:/var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/config/ceph.client.admin.keyring 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='client.14174 v1:192.168.123.105:0/3565297751' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm09", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 
ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: Deploying cephadm binary to vm09 2026-03-09T20:18:31.919 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: mgrmap e13: y(active, since 6s) 2026-03-09T20:18:31.920 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:31.920 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:18:31.920 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:31.920 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:31.920 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: Added host vm09 2026-03-09T20:18:31.920 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='client.14176 v1:192.168.123.105:0/2970828041' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:18:31.920 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:31.920 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3353423706' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-09T20:18:31.920 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3353423706' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-09T20:18:31.920 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: osdmap e4: 0 total, 0 up, 0 in 2026-03-09T20:18:31.920 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='client.14180 v1:192.168.123.109:0/3903292022' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm05:[v1:192.168.123.105:6789]=a;vm05:[v1:192.168.123.105:6790]=c;vm09:[v1:192.168.123.109:6789]=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:31.920 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: Saving service mon spec with placement vm05:[v1:192.168.123.105:6789]=a;vm05:[v1:192.168.123.105:6790]=c;vm09:[v1:192.168.123.109:6789]=b;count:3 2026-03-09T20:18:31.920 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:31.920 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:31.920 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:31.920 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:31.920 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:31.920 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:18:31.920 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:18:31.920 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:18:31.920 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: Updating vm09:/etc/ceph/ceph.conf 2026-03-09T20:18:31.920 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: Updating vm09:/var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/config/ceph.conf 2026-03-09T20:18:31.920 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: Updating vm09:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:18:31.920 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:31.920 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:31.920 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:31.920 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' 
entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T20:18:31.920 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:18:31.920 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.109:0/3306492740' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T20:18:31.920 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: Updating vm09:/var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/config/ceph.client.admin.keyring 2026-03-09T20:18:31.920 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: Deploying daemon mon.b on vm09 2026-03-09T20:18:31.920 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:31 vm05 ceph-mon[61345]: mon.c@-1(synchronizing).paxosservice(auth 1..3) refresh upgraded, format 0 -> 3 2026-03-09T20:18:35.499 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:35.498+0000 7f08977fe640 1 -- 192.168.123.109:0/3254033658 <== mon.0 v1:192.168.123.105:6789/0 11 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 2 v2) ==== 95+0+923 (unknown 3557084514 0 2055850134) 0x7f0898021460 con 0x7f08a8073b90 2026-03-09T20:18:35.499 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:18:35.499 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":2,"fsid":"c0151936-1bf4-11f1-b896-23f7bea8a6ea","modified":"2026-03-09T20:18:30.471526Z","created":"2026-03-09T20:17:53.169307Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6789","nonce":0}]},"addr":"192.168.123.109:6789/0","public_addr":"192.168.123.109:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1]} 2026-03-09T20:18:35.499 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 2 2026-03-09T20:18:35.501 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:35.501+0000 7f08b078c640 1 -- 192.168.123.109:0/3254033658 >> v1:192.168.123.105:6800/3290461294 conn(0x7f088803ec60 legacy=0x7f0888041120 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:35.501 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:35.501+0000 7f08b078c640 1 -- 192.168.123.109:0/3254033658 >> v1:192.168.123.105:6789/0 conn(0x7f08a8073b90 legacy=0x7f08a81a80d0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:35.502 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:35.501+0000 7f08b078c640 1 -- 192.168.123.109:0/3254033658 shutdown_connections 2026-03-09T20:18:35.502 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:35.501+0000 7f08b078c640 1 -- 192.168.123.109:0/3254033658 >> 192.168.123.109:0/3254033658 conn(0x7f08a806d390 msgr2=0x7f08a80727c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:35.502 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:35.501+0000 7f08b078c640 1 -- 192.168.123.109:0/3254033658 shutdown_connections 2026-03-09T20:18:35.502 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:35.501+0000 7f08b078c640 1 -- 192.168.123.109:0/3254033658 wait complete. 2026-03-09T20:18:35.829 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:35 vm05 ceph-mon[51870]: Deploying daemon mon.c on vm05 2026-03-09T20:18:35.829 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:35 vm05 ceph-mon[51870]: mon.a calling monitor election 2026-03-09T20:18:35.829 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:35 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:18:35.829 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:35 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:18:35.829 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:35 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.109:0/3254033658' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T20:18:35.829 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:35 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:18:35.829 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:35 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:18:35.829 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:35 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:18:35.829 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:35 vm05 ceph-mon[51870]: mon.b calling monitor election 2026-03-09T20:18:35.829 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:35 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:18:35.829 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:35 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:18:35.829 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:35 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:18:35.829 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:35 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:18:35.829 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:35 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:18:35.829 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:35 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:18:35.829 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:35 vm05 ceph-mon[51870]: mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-09T20:18:35.829 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 
20:18:35 vm05 ceph-mon[51870]: monmap epoch 2 2026-03-09T20:18:35.829 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:35 vm05 ceph-mon[51870]: fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea 2026-03-09T20:18:35.829 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:35 vm05 ceph-mon[51870]: last_changed 2026-03-09T20:18:30.471526+0000 2026-03-09T20:18:35.829 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:35 vm05 ceph-mon[51870]: created 2026-03-09T20:17:53.169307+0000 2026-03-09T20:18:35.829 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:35 vm05 ceph-mon[51870]: min_mon_release 19 (squid) 2026-03-09T20:18:35.829 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:35 vm05 ceph-mon[51870]: election_strategy: 1 2026-03-09T20:18:35.829 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:35 vm05 ceph-mon[51870]: 0: v1:192.168.123.105:6789/0 mon.a 2026-03-09T20:18:35.830 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:35 vm05 ceph-mon[51870]: 1: v1:192.168.123.109:6789/0 mon.b 2026-03-09T20:18:35.830 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:35 vm05 ceph-mon[51870]: fsmap 2026-03-09T20:18:35.830 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:35 vm05 ceph-mon[51870]: osdmap e4: 0 total, 0 up, 0 in 2026-03-09T20:18:35.830 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:35 vm05 ceph-mon[51870]: mgrmap e13: y(active, since 18s) 2026-03-09T20:18:35.830 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:35 vm05 ceph-mon[51870]: overall HEALTH_OK 2026-03-09T20:18:35.830 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:35 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:35.830 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:35 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:35.830 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:35 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:35.830 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:35 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:36.660 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 
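At this point the cephadm task waits for all three mons to appear in the monmap, polling "ceph mon dump -f json" through "cephadm shell" (the exact invocation is the next command below). A minimal polling loop using that same invocation, sketched outside of teuthology, could look like this; image, fsid and file paths are copied from the log:

    # Sketch only: poll the monmap via `cephadm shell` until 3 mons are present.
    import json
    import subprocess
    import time

    IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"
    FSID = "c0151936-1bf4-11f1-b896-23f7bea8a6ea"

    def mon_count():
        out = subprocess.check_output([
            "sudo", "cephadm", "--image", IMAGE, "shell",
            "-c", "/etc/ceph/ceph.conf",
            "-k", "/etc/ceph/ceph.client.admin.keyring",
            "--fsid", FSID,
            "--", "ceph", "mon", "dump", "-f", "json",
        ])
        return len(json.loads(out)["mons"])

    while mon_count() < 3:
        time.sleep(5)
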
2026-03-09T20:18:36.660 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph mon dump -f json 2026-03-09T20:18:36.813 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.b/config 2026-03-09T20:18:36.909 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:36 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:36.470+0000 7fcab79d4640 -1 mgr.server handle_report got status from non-daemon mon.b 2026-03-09T20:18:40.598 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:40.597+0000 7f67b7d0d640 1 -- 192.168.123.109:0/2393566063 >> v1:192.168.123.105:6789/0 conn(0x7f6790003660 legacy=0x7f6790005af0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:40.599 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:40.597+0000 7f67b7d0d640 1 -- 192.168.123.109:0/2393566063 shutdown_connections 2026-03-09T20:18:40.599 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:40.597+0000 7f67b7d0d640 1 -- 192.168.123.109:0/2393566063 >> 192.168.123.109:0/2393566063 conn(0x7f67b0100250 msgr2=0x7f67b0102670 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:40.599 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:40.597+0000 7f67b7d0d640 1 -- 192.168.123.109:0/2393566063 shutdown_connections 2026-03-09T20:18:40.599 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:40.598+0000 7f67b7d0d640 1 -- 192.168.123.109:0/2393566063 wait complete. 2026-03-09T20:18:40.599 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:40.598+0000 7f67b7d0d640 1 Processor -- start 2026-03-09T20:18:40.599 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:40.598+0000 7f67b7d0d640 1 -- start start 2026-03-09T20:18:40.599 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:40.599+0000 7f67b7d0d640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f67b01b8190 con 0x7f67b01a90e0 2026-03-09T20:18:40.599 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:40.599+0000 7f67b7d0d640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f67b01b9390 con 0x7f67b01b4460 2026-03-09T20:18:40.599 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:40.599+0000 7f67b7d0d640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f67b01ba590 con 0x7f6790003660 2026-03-09T20:18:40.599 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:40.599+0000 7f67b5281640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f67b01a90e0 0x7f67b01b2d40 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.109:50748/0 (socket says 192.168.123.109:50748) 2026-03-09T20:18:40.599 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:40.599+0000 7f67b5281640 1 -- 192.168.123.109:0/3542784716 learned_addr learned my addr 192.168.123.109:0/3542784716 (peer_addr_for_me v1:192.168.123.109:0/0) 2026-03-09T20:18:40.600 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:40.599+0000 7f679effd640 1 -- 192.168.123.109:0/3542784716 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1731640420 0 0) 0x7f67b01b8190 con 0x7f67b01a90e0 2026-03-09T20:18:40.600 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:40.600+0000 7f679effd640 1 -- 192.168.123.109:0/3542784716 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f6798003620 con 0x7f67b01a90e0 2026-03-09T20:18:40.601 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:40.601+0000 7f679effd640 1 -- 192.168.123.109:0/3542784716 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3090287226 0 0) 0x7f6798003620 con 0x7f67b01a90e0 2026-03-09T20:18:40.601 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:40.601+0000 7f679effd640 1 -- 192.168.123.109:0/3542784716 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f67b01b8190 con 0x7f67b01a90e0 2026-03-09T20:18:40.601 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:40.601+0000 7f679effd640 1 -- 192.168.123.109:0/3542784716 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f67a00030c0 con 0x7f67b01a90e0 2026-03-09T20:18:40.602 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:40.601+0000 7f679effd640 1 -- 192.168.123.109:0/3542784716 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 1917944993 0 0) 0x7f67b01b8190 con 0x7f67b01a90e0 2026-03-09T20:18:40.602 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:40.601+0000 7f679effd640 1 -- 192.168.123.109:0/3542784716 >> v1:192.168.123.105:6790/0 conn(0x7f6790003660 legacy=0x7f67b01a89d0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:40.602 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:40.601+0000 7f679effd640 1 -- 192.168.123.109:0/3542784716 >> v1:192.168.123.109:6789/0 conn(0x7f67b01b4460 legacy=0x7f67b01b6890 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:40.602 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:40.601+0000 7f679effd640 1 -- 192.168.123.109:0/3542784716 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f67b01bb790 con 0x7f67b01a90e0 2026-03-09T20:18:40.602 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:40.602+0000 7f67b7d0d640 1 -- 192.168.123.109:0/3542784716 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f67b01b95c0 con 0x7f67b01a90e0 2026-03-09T20:18:40.602 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:40.602+0000 7f67b7d0d640 1 -- 192.168.123.109:0/3542784716 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f67b01b9b20 con 0x7f67b01a90e0 2026-03-09T20:18:40.604 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:40.603+0000 7f679effd640 1 -- 192.168.123.109:0/3542784716 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f67a0004df0 con 0x7f67b01a90e0 2026-03-09T20:18:40.604 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:40.603+0000 7f679effd640 1 -- 192.168.123.109:0/3542784716 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f67a0005d00 con 0x7f67b01a90e0 2026-03-09T20:18:40.604 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:40.604+0000 7f679effd640 1 -- 192.168.123.109:0/3542784716 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 13) ==== 50271+0+0 (unknown 2746147771 0 0) 0x7f67a0012330 con 0x7f67b01a90e0 2026-03-09T20:18:40.605 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:40.604+0000 7f679effd640 1 -- 192.168.123.109:0/3542784716 <== 
mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(4..4 src has 1..4) ==== 1069+0+0 (unknown 2499184964 0 0) 0x7f67a004e2e0 con 0x7f67b01a90e0 2026-03-09T20:18:40.606 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:40.604+0000 7f67b7d0d640 1 -- 192.168.123.109:0/3542784716 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f6778005180 con 0x7f67b01a90e0 2026-03-09T20:18:40.609 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:40.608+0000 7f679effd640 1 -- 192.168.123.109:0/3542784716 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f67a0018850 con 0x7f67b01a90e0 2026-03-09T20:18:40.750 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:40.750+0000 7f67b7d0d640 1 -- 192.168.123.109:0/3542784716 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "mon dump", "format": "json"} v 0) -- 0x7f6778005470 con 0x7f67b01a90e0 2026-03-09T20:18:40.751 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:18:40.751 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":3,"fsid":"c0151936-1bf4-11f1-b896-23f7bea8a6ea","modified":"2026-03-09T20:18:35.584014Z","created":"2026-03-09T20:17:53.169307Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6789","nonce":0}]},"addr":"192.168.123.109:6789/0","public_addr":"192.168.123.109:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"c","public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6790","nonce":0}]},"addr":"192.168.123.105:6790/0","public_addr":"192.168.123.105:6790/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1]} 2026-03-09T20:18:40.752 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:40.751+0000 7f679effd640 1 -- 192.168.123.109:0/3542784716 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 3 v3) ==== 95+0+1145 (unknown 409899127 0 2652858422) 0x7f67a0021380 con 0x7f67b01a90e0 2026-03-09T20:18:40.752 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 3 2026-03-09T20:18:40.753 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:40.753+0000 7f67b7d0d640 1 -- 192.168.123.109:0/3542784716 >> v1:192.168.123.105:6800/3290461294 conn(0x7f679803eef0 legacy=0x7f67980413b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:40.753 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:40.753+0000 7f67b7d0d640 1 -- 192.168.123.109:0/3542784716 >> v1:192.168.123.105:6789/0 conn(0x7f67b01a90e0 legacy=0x7f67b01b2d40 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:40.754 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:40.753+0000 7f67b7d0d640 1 -- 192.168.123.109:0/3542784716 shutdown_connections 2026-03-09T20:18:40.754 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:40.753+0000 7f67b7d0d640 1 
-- 192.168.123.109:0/3542784716 >> 192.168.123.109:0/3542784716 conn(0x7f67b0100250 msgr2=0x7f67b010ced0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:40.754 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:40.753+0000 7f67b7d0d640 1 -- 192.168.123.109:0/3542784716 shutdown_connections 2026-03-09T20:18:40.754 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:40.754+0000 7f67b7d0d640 1 -- 192.168.123.109:0/3542784716 wait complete. 2026-03-09T20:18:40.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:40 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:18:40.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:40 vm05 ceph-mon[51870]: mon.a calling monitor election 2026-03-09T20:18:40.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:40 vm05 ceph-mon[51870]: mon.b calling monitor election 2026-03-09T20:18:40.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:40 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:18:40.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:40 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:18:40.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:40 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:18:40.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:40 vm05 ceph-mon[51870]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:18:40.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:40 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:18:40.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:40 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:18:40.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:40 vm05 ceph-mon[51870]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:18:40.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:40 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:18:40.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:40 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:18:40.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:40 vm05 ceph-mon[51870]: mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-09T20:18:40.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:40 vm05 ceph-mon[51870]: monmap epoch 3 2026-03-09T20:18:40.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:40 vm05 ceph-mon[51870]: fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea 2026-03-09T20:18:40.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:40 vm05 ceph-mon[51870]: last_changed 2026-03-09T20:18:35.584014+0000 2026-03-09T20:18:40.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:40 vm05 ceph-mon[51870]: created 2026-03-09T20:17:53.169307+0000 2026-03-09T20:18:40.910 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:40 vm05 ceph-mon[51870]: min_mon_release 19 (squid) 2026-03-09T20:18:40.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:40 vm05 ceph-mon[51870]: election_strategy: 1 2026-03-09T20:18:40.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:40 vm05 ceph-mon[51870]: 0: v1:192.168.123.105:6789/0 mon.a 2026-03-09T20:18:40.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:40 vm05 ceph-mon[51870]: 1: v1:192.168.123.109:6789/0 mon.b 2026-03-09T20:18:40.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:40 vm05 ceph-mon[51870]: 2: v1:192.168.123.105:6790/0 mon.c 2026-03-09T20:18:40.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:40 vm05 ceph-mon[51870]: fsmap 2026-03-09T20:18:40.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:40 vm05 ceph-mon[51870]: osdmap e4: 0 total, 0 up, 0 in 2026-03-09T20:18:40.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:40 vm05 ceph-mon[51870]: mgrmap e13: y(active, since 23s) 2026-03-09T20:18:40.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:40 vm05 ceph-mon[51870]: overall HEALTH_OK 2026-03-09T20:18:40.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:40 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:40.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:40 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:40.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:40 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:40.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:40 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:40.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:40 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:40.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:40 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:18:40.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:40 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:18:40.923 INFO:tasks.cephadm:Generating final ceph.conf file... 
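The final ceph.conf is produced by running "ceph config generate-minimal-conf" inside "cephadm shell" (next command below); its output is the short [global] section with the fsid and mon_host list that appears further down. A self-contained sketch of capturing that output, reusing the same invocation:

    # Sketch only: capture the minimal ceph.conf the same way the task does below.
    import subprocess

    IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"
    FSID = "c0151936-1bf4-11f1-b896-23f7bea8a6ea"

    minimal_conf = subprocess.check_output([
        "sudo", "cephadm", "--image", IMAGE, "shell",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "--fsid", FSID,
        "--", "ceph", "config", "generate-minimal-conf",
    ]).decode()
    print(minimal_conf)  # [global] section with fsid and mon_host, as logged below
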
2026-03-09T20:18:40.924 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph config generate-minimal-conf 2026-03-09T20:18:41.099 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:18:41.250 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:41.250+0000 7fee95cf8640 1 -- 192.168.123.105:0/1290954695 >> v1:192.168.123.105:6789/0 conn(0x7fee9010a4e0 legacy=0x7fee9010a8c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:41.250 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:41.251+0000 7fee95cf8640 1 -- 192.168.123.105:0/1290954695 shutdown_connections 2026-03-09T20:18:41.250 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:41.251+0000 7fee95cf8640 1 -- 192.168.123.105:0/1290954695 >> 192.168.123.105:0/1290954695 conn(0x7fee90100170 msgr2=0x7fee90102590 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:41.250 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:41.251+0000 7fee95cf8640 1 -- 192.168.123.105:0/1290954695 shutdown_connections 2026-03-09T20:18:41.250 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:41.251+0000 7fee95cf8640 1 -- 192.168.123.105:0/1290954695 wait complete. 2026-03-09T20:18:41.251 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:41.251+0000 7fee95cf8640 1 Processor -- start 2026-03-09T20:18:41.251 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:41.251+0000 7fee95cf8640 1 -- start start 2026-03-09T20:18:41.251 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:41.252+0000 7fee95cf8640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fee901ab4a0 con 0x7fee9010a4e0 2026-03-09T20:18:41.251 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:41.252+0000 7fee95cf8640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fee901ac6a0 con 0x7fee901a7770 2026-03-09T20:18:41.251 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:41.252+0000 7fee95cf8640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fee901ad8a0 con 0x7fee9019bbf0 2026-03-09T20:18:41.251 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:41.252+0000 7fee8effd640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7fee9019bbf0 0x7fee901a6050 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.105:52942/0 (socket says 192.168.123.105:52942) 2026-03-09T20:18:41.251 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:41.252+0000 7fee8effd640 1 -- 192.168.123.105:0/3171058630 learned_addr learned my addr 192.168.123.105:0/3171058630 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:41.252 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:41.252+0000 7fee8cff9640 1 -- 192.168.123.105:0/3171058630 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1191879318 0 0) 0x7fee901ac6a0 con 0x7fee901a7770 2026-03-09T20:18:41.252 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:41.252+0000 7fee8cff9640 1 -- 192.168.123.105:0/3171058630 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fee60003620 con 0x7fee901a7770 2026-03-09T20:18:41.252 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:41.253+0000 7fee8cff9640 1 -- 192.168.123.105:0/3171058630 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 293929916 0 0) 0x7fee901ab4a0 con 0x7fee9010a4e0 2026-03-09T20:18:41.252 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:41.253+0000 7fee8cff9640 1 -- 192.168.123.105:0/3171058630 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fee901ac6a0 con 0x7fee9010a4e0 2026-03-09T20:18:41.252 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:41.253+0000 7fee8cff9640 1 -- 192.168.123.105:0/3171058630 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3289181540 0 0) 0x7fee60003620 con 0x7fee901a7770 2026-03-09T20:18:41.252 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:41.253+0000 7fee8cff9640 1 -- 192.168.123.105:0/3171058630 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fee901ab4a0 con 0x7fee901a7770 2026-03-09T20:18:41.252 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:41.253+0000 7fee8cff9640 1 -- 192.168.123.105:0/3171058630 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7fee7c003040 con 0x7fee901a7770 2026-03-09T20:18:41.252 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:41.253+0000 7fee8cff9640 1 -- 192.168.123.105:0/3171058630 <== mon.1 v1:192.168.123.109:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 1541171534 0 0) 0x7fee901ab4a0 con 0x7fee901a7770 2026-03-09T20:18:41.252 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:41.253+0000 7fee8cff9640 1 -- 192.168.123.105:0/3171058630 >> v1:192.168.123.105:6790/0 conn(0x7fee9019bbf0 legacy=0x7fee901a6050 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:41.253 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:41.253+0000 7fee8cff9640 1 -- 192.168.123.105:0/3171058630 >> v1:192.168.123.105:6789/0 conn(0x7fee9010a4e0 legacy=0x7fee9019b4e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:41.253 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:41.253+0000 7fee8cff9640 1 -- 192.168.123.105:0/3171058630 --> v1:192.168.123.109:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fee901aeaa0 con 0x7fee901a7770 2026-03-09T20:18:41.253 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:41.253+0000 7fee95cf8640 1 -- 192.168.123.105:0/3171058630 --> v1:192.168.123.109:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7fee901ac8d0 con 0x7fee901a7770 2026-03-09T20:18:41.253 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:41.253+0000 7fee8cff9640 1 -- 192.168.123.105:0/3171058630 <== mon.1 v1:192.168.123.109:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7fee7c003b20 con 0x7fee901a7770 2026-03-09T20:18:41.253 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:41.253+0000 7fee8cff9640 1 -- 192.168.123.105:0/3171058630 <== mon.1 v1:192.168.123.109:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7fee7c004af0 con 0x7fee901a7770 2026-03-09T20:18:41.254 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:41.254+0000 7fee95cf8640 1 -- 192.168.123.105:0/3171058630 --> v1:192.168.123.109:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7fee901ace80 con 0x7fee901a7770 2026-03-09T20:18:41.254 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:41.254+0000 7fee8cff9640 1 -- 
192.168.123.105:0/3171058630 <== mon.1 v1:192.168.123.109:6789/0 7 ==== mgrmap(e 13) ==== 50271+0+0 (unknown 2746147771 0 0) 0x7fee7c0036d0 con 0x7fee901a7770 2026-03-09T20:18:41.254 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:41.255+0000 7fee8cff9640 1 -- 192.168.123.105:0/3171058630 <== mon.1 v1:192.168.123.109:6789/0 8 ==== osd_map(4..4 src has 1..4) ==== 1069+0+0 (unknown 2499184964 0 0) 0x7fee7c015c80 con 0x7fee901a7770 2026-03-09T20:18:41.254 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:41.255+0000 7fee95cf8640 1 -- 192.168.123.105:0/3171058630 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fee9019c840 con 0x7fee901a7770 2026-03-09T20:18:41.257 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:41.258+0000 7fee8cff9640 1 -- 192.168.123.105:0/3171058630 <== mon.1 v1:192.168.123.109:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7fee7c01a5d0 con 0x7fee901a7770 2026-03-09T20:18:41.353 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:41.354+0000 7fee95cf8640 1 -- 192.168.123.105:0/3171058630 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "config generate-minimal-conf"} v 0) -- 0x7fee9010eff0 con 0x7fee901a7770 2026-03-09T20:18:41.354 INFO:teuthology.orchestra.run.vm05.stdout:# minimal ceph.conf for c0151936-1bf4-11f1-b896-23f7bea8a6ea 2026-03-09T20:18:41.354 INFO:teuthology.orchestra.run.vm05.stdout:[global] 2026-03-09T20:18:41.354 INFO:teuthology.orchestra.run.vm05.stdout: fsid = c0151936-1bf4-11f1-b896-23f7bea8a6ea 2026-03-09T20:18:41.355 INFO:teuthology.orchestra.run.vm05.stdout: mon_host = 192.168.123.105:6789/0 192.168.123.109:6789/0 v1:192.168.123.105:6790/0 2026-03-09T20:18:41.355 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:41.355+0000 7fee8cff9640 1 -- 192.168.123.105:0/3171058630 <== mon.1 v1:192.168.123.109:6789/0 10 ==== mon_command_ack([{"prefix": "config generate-minimal-conf"}]=0 v9) ==== 76+0+199 (unknown 2189217731 0 3089923300) 0x7fee7c01f510 con 0x7fee901a7770 2026-03-09T20:18:41.357 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:41.357+0000 7fee95cf8640 1 -- 192.168.123.105:0/3171058630 >> v1:192.168.123.105:6800/3290461294 conn(0x7fee6003ea90 legacy=0x7fee60040f50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:41.357 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:41.357+0000 7fee95cf8640 1 -- 192.168.123.105:0/3171058630 >> v1:192.168.123.109:6789/0 conn(0x7fee901a7770 legacy=0x7fee901a9ba0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:41.357 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:41.357+0000 7fee95cf8640 1 -- 192.168.123.105:0/3171058630 shutdown_connections 2026-03-09T20:18:41.357 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:41.357+0000 7fee95cf8640 1 -- 192.168.123.105:0/3171058630 >> 192.168.123.105:0/3171058630 conn(0x7fee90100170 msgr2=0x7fee9010bce0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:41.357 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:41.357+0000 7fee95cf8640 1 -- 192.168.123.105:0/3171058630 shutdown_connections 2026-03-09T20:18:41.357 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:41.357+0000 7fee95cf8640 1 -- 192.168.123.105:0/3171058630 wait complete. 2026-03-09T20:18:41.507 INFO:tasks.cephadm:Distributing (final) config and client.admin keyring... 
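The generate-minimal-conf output above (a [global] section with just the fsid and the mon_host list) is what gets installed as /etc/ceph/ceph.conf on both test nodes, together with the client.admin keyring. Teuthology pushes each file by piping its contents into a remote "sudo dd of=<path>" (visible in the next few lines); outside the framework the same push could be approximated as sketched below (the plain ssh call is only an illustration of the command shape; teuthology drives the remote command through its own connection layer, and the local file names are assumptions).

    # Approximate equivalent of the distribution step (sketch):
    ssh vm05 sudo dd of=/etc/ceph/ceph.conf < ceph.conf
    ssh vm05 sudo dd of=/etc/ceph/ceph.client.admin.keyring < ceph.client.admin.keyring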
2026-03-09T20:18:41.507 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-09T20:18:41.507 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/ceph/ceph.conf 2026-03-09T20:18:41.549 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-09T20:18:41.549 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:18:41.617 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T20:18:41.617 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/ceph/ceph.conf 2026-03-09T20:18:41.640 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T20:18:41.640 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:18:41.704 INFO:tasks.cephadm:Adding mgr.y on vm05 2026-03-09T20:18:41.704 INFO:tasks.cephadm:Adding mgr.x on vm09 2026-03-09T20:18:41.704 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph orch apply mgr '2;vm05=y;vm09=x' 2026-03-09T20:18:41.741 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:41 vm05 ceph-mon[51870]: Updating vm05:/etc/ceph/ceph.conf 2026-03-09T20:18:41.741 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:41 vm05 ceph-mon[51870]: Updating vm09:/etc/ceph/ceph.conf 2026-03-09T20:18:41.741 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:41 vm05 ceph-mon[51870]: Updating vm05:/var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/config/ceph.conf 2026-03-09T20:18:41.741 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:41 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.109:0/3542784716' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T20:18:41.741 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:41 vm05 ceph-mon[51870]: Updating vm09:/var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/config/ceph.conf 2026-03-09T20:18:41.741 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:41 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:41.741 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:41 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:41.741 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:41 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:41.741 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:41 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:41.741 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:41 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:41.741 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:41 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:41.741 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:41 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:41.741 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:41 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:41.741 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:41 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:41.741 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:41 vm05 
ceph-mon[51870]: Reconfiguring mon.a (unknown last config time)... 2026-03-09T20:18:41.741 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:41 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T20:18:41.741 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:41 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T20:18:41.741 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:41 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:18:41.741 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:41 vm05 ceph-mon[51870]: Reconfiguring daemon mon.a on vm05 2026-03-09T20:18:41.741 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:41 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3171058630' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:18:41.741 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:41 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:41.741 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:41 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:41.741 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:41 vm05 ceph-mon[51870]: Reconfiguring mon.c (monmap changed)... 2026-03-09T20:18:41.741 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:41 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T20:18:41.741 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:41 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T20:18:41.741 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:41 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:18:41.741 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:41 vm05 ceph-mon[51870]: Reconfiguring daemon mon.c on vm05 2026-03-09T20:18:41.742 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:41 vm05 ceph-mon[51870]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:18:41.742 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:41 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:18:41.912 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.b/config 2026-03-09T20:18:42.048 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:42.047+0000 7f10bbadd640 1 -- 192.168.123.109:0/1885817903 >> v1:192.168.123.105:6789/0 conn(0x7f10b4104610 legacy=0x7f10b4104a10 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:42.048 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:42.047+0000 7f10bbadd640 1 -- 192.168.123.109:0/1885817903 shutdown_connections 2026-03-09T20:18:42.048 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:42.047+0000 7f10bbadd640 1 -- 192.168.123.109:0/1885817903 >> 192.168.123.109:0/1885817903 
conn(0x7f10b40ffda0 msgr2=0x7f10b41021e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:42.048 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:42.048+0000 7f10bbadd640 1 -- 192.168.123.109:0/1885817903 shutdown_connections 2026-03-09T20:18:42.048 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:42.048+0000 7f10bbadd640 1 -- 192.168.123.109:0/1885817903 wait complete. 2026-03-09T20:18:42.048 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:42.048+0000 7f10bbadd640 1 Processor -- start 2026-03-09T20:18:42.048 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:42.048+0000 7f10bbadd640 1 -- start start 2026-03-09T20:18:42.049 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:42.048+0000 7f10bbadd640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f10b41aae00 con 0x7f10b407b4d0 2026-03-09T20:18:42.049 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:42.048+0000 7f10bbadd640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f10b41abfe0 con 0x7f10b4104610 2026-03-09T20:18:42.049 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:42.048+0000 7f10bbadd640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f10b41ad200 con 0x7f10b4078040 2026-03-09T20:18:42.049 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:42.049+0000 7f10b9051640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f10b407b4d0 0x7f10b4077900 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.109:34038/0 (socket says 192.168.123.109:34038) 2026-03-09T20:18:42.049 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:42.049+0000 7f10b9051640 1 -- 192.168.123.109:0/1902280486 learned_addr learned my addr 192.168.123.109:0/1902280486 (peer_addr_for_me v1:192.168.123.109:0/0) 2026-03-09T20:18:42.049 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:42.049+0000 7f10a2ffd640 1 -- 192.168.123.109:0/1902280486 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2843313637 0 0) 0x7f10b41aae00 con 0x7f10b407b4d0 2026-03-09T20:18:42.049 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:42.049+0000 7f10a2ffd640 1 -- 192.168.123.109:0/1902280486 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f1090003620 con 0x7f10b407b4d0 2026-03-09T20:18:42.050 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:42.049+0000 7f10a2ffd640 1 -- 192.168.123.109:0/1902280486 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 65076963 0 0) 0x7f10b41abfe0 con 0x7f10b4104610 2026-03-09T20:18:42.050 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:42.049+0000 7f10a2ffd640 1 -- 192.168.123.109:0/1902280486 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f10b41aae00 con 0x7f10b4104610 2026-03-09T20:18:42.050 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:42.049+0000 7f10a2ffd640 1 -- 192.168.123.109:0/1902280486 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1020832911 0 0) 0x7f1090003620 con 0x7f10b407b4d0 2026-03-09T20:18:42.050 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:42.049+0000 7f10a2ffd640 1 -- 192.168.123.109:0/1902280486 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f10b41abfe0 con 0x7f10b407b4d0 2026-03-09T20:18:42.050 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:42.049+0000 7f10a2ffd640 1 -- 192.168.123.109:0/1902280486 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f10a80030c0 con 0x7f10b407b4d0 2026-03-09T20:18:42.050 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:42.050+0000 7f10a2ffd640 1 -- 192.168.123.109:0/1902280486 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 842071658 0 0) 0x7f10b41aae00 con 0x7f10b4104610 2026-03-09T20:18:42.050 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:42.050+0000 7f10a2ffd640 1 -- 192.168.123.109:0/1902280486 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f1090003620 con 0x7f10b4104610 2026-03-09T20:18:42.050 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:42.050+0000 7f10a2ffd640 1 -- 192.168.123.109:0/1902280486 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f10a4002bc0 con 0x7f10b4104610 2026-03-09T20:18:42.050 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:42.050+0000 7f10a2ffd640 1 -- 192.168.123.109:0/1902280486 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 1128494474 0 0) 0x7f10b41abfe0 con 0x7f10b407b4d0 2026-03-09T20:18:42.051 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:42.050+0000 7f10a2ffd640 1 -- 192.168.123.109:0/1902280486 >> v1:192.168.123.105:6790/0 conn(0x7f10b4078040 legacy=0x7f10b41a96e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:42.051 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:42.050+0000 7f10a2ffd640 1 -- 192.168.123.109:0/1902280486 >> v1:192.168.123.109:6789/0 conn(0x7f10b4104610 legacy=0x7f10b407adc0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:42.051 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:42.050+0000 7f10a2ffd640 1 -- 192.168.123.109:0/1902280486 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f10b41ae400 con 0x7f10b407b4d0 2026-03-09T20:18:42.051 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:42.050+0000 7f10bbadd640 1 -- 192.168.123.109:0/1902280486 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f10b41ad430 con 0x7f10b407b4d0 2026-03-09T20:18:42.051 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:42.050+0000 7f10bbadd640 1 -- 192.168.123.109:0/1902280486 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f10b41ada60 con 0x7f10b407b4d0 2026-03-09T20:18:42.051 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:42.050+0000 7f10bbadd640 1 -- 192.168.123.109:0/1902280486 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f107c005180 con 0x7f10b407b4d0 2026-03-09T20:18:42.051 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:42.051+0000 7f10a2ffd640 1 -- 192.168.123.109:0/1902280486 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f10a8004df0 con 0x7f10b407b4d0 2026-03-09T20:18:42.051 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:42.051+0000 7f10a2ffd640 1 -- 192.168.123.109:0/1902280486 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f10a8005d50 con 0x7f10b407b4d0 2026-03-09T20:18:42.052 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:42.051+0000 7f10a2ffd640 1 -- 
192.168.123.109:0/1902280486 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 13) ==== 50271+0+0 (unknown 2746147771 0 0) 0x7f10a80049a0 con 0x7f10b407b4d0 2026-03-09T20:18:42.052 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:42.051+0000 7f10a2ffd640 1 -- 192.168.123.109:0/1902280486 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(4..4 src has 1..4) ==== 1069+0+0 (unknown 2499184964 0 0) 0x7f10a804d400 con 0x7f10b407b4d0 2026-03-09T20:18:42.054 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:42.054+0000 7f10a2ffd640 1 -- 192.168.123.109:0/1902280486 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f10a80204a0 con 0x7f10b407b4d0 2026-03-09T20:18:42.148 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:42.148+0000 7f10bbadd640 1 -- 192.168.123.109:0/1902280486 --> v1:192.168.123.105:6800/3290461294 -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm05=y;vm09=x", "target": ["mon-mgr", ""]}) -- 0x7f107c002bf0 con 0x7f109003eb70 2026-03-09T20:18:42.154 INFO:teuthology.orchestra.run.vm09.stdout:Scheduled mgr update... 2026-03-09T20:18:42.154 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:42.153+0000 7f10a2ffd640 1 -- 192.168.123.109:0/1902280486 <== mgr.14150 v1:192.168.123.105:6800/3290461294 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+24 (unknown 0 0 325935098) 0x7f107c002bf0 con 0x7f109003eb70 2026-03-09T20:18:42.156 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:42.155+0000 7f10a0fb9640 1 -- 192.168.123.109:0/1902280486 >> v1:192.168.123.105:6800/3290461294 conn(0x7f109003eb70 legacy=0x7f1090041030 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:42.156 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:42.155+0000 7f10a0fb9640 1 -- 192.168.123.109:0/1902280486 >> v1:192.168.123.105:6789/0 conn(0x7f10b407b4d0 legacy=0x7f10b4077900 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:42.156 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:42.155+0000 7f10a0fb9640 1 -- 192.168.123.109:0/1902280486 shutdown_connections 2026-03-09T20:18:42.156 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:42.155+0000 7f10a0fb9640 1 -- 192.168.123.109:0/1902280486 >> 192.168.123.109:0/1902280486 conn(0x7f10b40ffda0 msgr2=0x7f10b41021e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:42.156 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:42.155+0000 7f10a0fb9640 1 -- 192.168.123.109:0/1902280486 shutdown_connections 2026-03-09T20:18:42.156 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:18:42.155+0000 7f10a0fb9640 1 -- 192.168.123.109:0/1902280486 wait complete. 2026-03-09T20:18:42.306 DEBUG:teuthology.orchestra.run.vm09:mgr.x> sudo journalctl -f -n 0 -u ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@mgr.x.service 2026-03-09T20:18:42.307 INFO:tasks.cephadm:Deploying OSDs... 2026-03-09T20:18:42.307 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-09T20:18:42.307 DEBUG:teuthology.orchestra.run.vm05:> dd if=/scratch_devs of=/dev/stdout 2026-03-09T20:18:42.323 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T20:18:42.323 DEBUG:teuthology.orchestra.run.vm05:> ls /dev/[sv]d? 
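Two things happen in quick succession here. First, the placement string '2;vm05=y;vm09=x' passed to "ceph orch apply mgr" asks for two managers, pinned to vm05 and vm09 with the explicit daemon names y and x. Second, OSD deployment starts with scratch-device discovery: reading /scratch_devs fails (exit status 1), so the task falls back to listing /dev/[sv]d?, drops the root device, and vets each remaining disk. The vetting commands that follow in the log amount to the checks sketched below (the device name is just an example).

    # Per-device sanity checks before a disk is considered usable for an OSD (sketch):
    stat /dev/vdb                                   # node exists and is a block device
    sudo dd if=/dev/vdb of=/dev/null count=1        # first sector is readable
    ! mount | grep -v devtmpfs | grep -q /dev/vdb   # nothing is mounted from it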
2026-03-09T20:18:42.379 INFO:teuthology.orchestra.run.vm05.stdout:/dev/vda 2026-03-09T20:18:42.379 INFO:teuthology.orchestra.run.vm05.stdout:/dev/vdb 2026-03-09T20:18:42.379 INFO:teuthology.orchestra.run.vm05.stdout:/dev/vdc 2026-03-09T20:18:42.379 INFO:teuthology.orchestra.run.vm05.stdout:/dev/vdd 2026-03-09T20:18:42.379 INFO:teuthology.orchestra.run.vm05.stdout:/dev/vde 2026-03-09T20:18:42.379 WARNING:teuthology.misc:Removing root device: /dev/vda from device list 2026-03-09T20:18:42.379 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde'] 2026-03-09T20:18:42.379 DEBUG:teuthology.orchestra.run.vm05:> stat /dev/vdb 2026-03-09T20:18:42.437 INFO:teuthology.orchestra.run.vm05.stdout: File: /dev/vdb 2026-03-09T20:18:42.437 INFO:teuthology.orchestra.run.vm05.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-09T20:18:42.437 INFO:teuthology.orchestra.run.vm05.stdout:Device: 6h/6d Inode: 254 Links: 1 Device type: fc,10 2026-03-09T20:18:42.437 INFO:teuthology.orchestra.run.vm05.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T20:18:42.437 INFO:teuthology.orchestra.run.vm05.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-09T20:18:42.437 INFO:teuthology.orchestra.run.vm05.stdout:Access: 2026-03-09 20:18:22.904080602 +0000 2026-03-09T20:18:42.437 INFO:teuthology.orchestra.run.vm05.stdout:Modify: 2026-03-09 20:14:19.323380415 +0000 2026-03-09T20:18:42.437 INFO:teuthology.orchestra.run.vm05.stdout:Change: 2026-03-09 20:14:19.323380415 +0000 2026-03-09T20:18:42.437 INFO:teuthology.orchestra.run.vm05.stdout: Birth: 2026-03-09 20:10:58.295000000 +0000 2026-03-09T20:18:42.437 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/dev/vdb of=/dev/null count=1 2026-03-09T20:18:42.500 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records in 2026-03-09T20:18:42.500 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records out 2026-03-09T20:18:42.500 INFO:teuthology.orchestra.run.vm05.stderr:512 bytes copied, 0.000157614 s, 3.2 MB/s 2026-03-09T20:18:42.501 DEBUG:teuthology.orchestra.run.vm05:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdb 2026-03-09T20:18:42.557 DEBUG:teuthology.orchestra.run.vm05:> stat /dev/vdc 2026-03-09T20:18:42.617 INFO:teuthology.orchestra.run.vm05.stdout: File: /dev/vdc 2026-03-09T20:18:42.617 INFO:teuthology.orchestra.run.vm05.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-09T20:18:42.617 INFO:teuthology.orchestra.run.vm05.stdout:Device: 6h/6d Inode: 255 Links: 1 Device type: fc,20 2026-03-09T20:18:42.617 INFO:teuthology.orchestra.run.vm05.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T20:18:42.617 INFO:teuthology.orchestra.run.vm05.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-09T20:18:42.617 INFO:teuthology.orchestra.run.vm05.stdout:Access: 2026-03-09 20:18:22.962080673 +0000 2026-03-09T20:18:42.617 INFO:teuthology.orchestra.run.vm05.stdout:Modify: 2026-03-09 20:14:19.325380415 +0000 2026-03-09T20:18:42.617 INFO:teuthology.orchestra.run.vm05.stdout:Change: 2026-03-09 20:14:19.325380415 +0000 2026-03-09T20:18:42.617 INFO:teuthology.orchestra.run.vm05.stdout: Birth: 2026-03-09 20:10:58.306000000 +0000 2026-03-09T20:18:42.617 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/dev/vdc of=/dev/null count=1 2026-03-09T20:18:42.680 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records in 2026-03-09T20:18:42.681 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records out 2026-03-09T20:18:42.681 INFO:teuthology.orchestra.run.vm05.stderr:512 bytes copied, 0.000152966 s, 3.3 MB/s 2026-03-09T20:18:42.681 DEBUG:teuthology.orchestra.run.vm05:> ! mount | grep -v devtmpfs | grep -q /dev/vdc 2026-03-09T20:18:42.737 DEBUG:teuthology.orchestra.run.vm05:> stat /dev/vdd 2026-03-09T20:18:42.795 INFO:teuthology.orchestra.run.vm05.stdout: File: /dev/vdd 2026-03-09T20:18:42.795 INFO:teuthology.orchestra.run.vm05.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-09T20:18:42.795 INFO:teuthology.orchestra.run.vm05.stdout:Device: 6h/6d Inode: 256 Links: 1 Device type: fc,30 2026-03-09T20:18:42.795 INFO:teuthology.orchestra.run.vm05.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T20:18:42.795 INFO:teuthology.orchestra.run.vm05.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-09T20:18:42.795 INFO:teuthology.orchestra.run.vm05.stdout:Access: 2026-03-09 20:18:23.050080780 +0000 2026-03-09T20:18:42.795 INFO:teuthology.orchestra.run.vm05.stdout:Modify: 2026-03-09 20:14:19.309380415 +0000 2026-03-09T20:18:42.795 INFO:teuthology.orchestra.run.vm05.stdout:Change: 2026-03-09 20:14:19.309380415 +0000 2026-03-09T20:18:42.795 INFO:teuthology.orchestra.run.vm05.stdout: Birth: 2026-03-09 20:10:58.310000000 +0000 2026-03-09T20:18:42.795 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/dev/vdd of=/dev/null count=1 2026-03-09T20:18:42.857 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: Deploying daemon mon.c on vm05 2026-03-09T20:18:42.857 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: mon.a calling monitor election 2026-03-09T20:18:42.857 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:18:42.857 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:18:42.857 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 
20:18:42 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.109:0/3254033658' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T20:18:42.857 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:18:42.857 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:18:42.857 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:18:42.857 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: mon.b calling monitor election 2026-03-09T20:18:42.857 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:18:42.857 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:18:42.857 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:18:42.857 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:18:42.857 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:18:42.857 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:18:42.857 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-09T20:18:42.857 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: monmap epoch 2 2026-03-09T20:18:42.857 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea 2026-03-09T20:18:42.857 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: last_changed 2026-03-09T20:18:30.471526+0000 2026-03-09T20:18:42.857 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: created 2026-03-09T20:17:53.169307+0000 2026-03-09T20:18:42.857 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: min_mon_release 19 (squid) 2026-03-09T20:18:42.857 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: election_strategy: 1 2026-03-09T20:18:42.857 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: 0: v1:192.168.123.105:6789/0 mon.a 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: 1: v1:192.168.123.109:6789/0 mon.b 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 
20:18:42 vm05 ceph-mon[61345]: fsmap 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: osdmap e4: 0 total, 0 up, 0 in 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: mgrmap e13: y(active, since 18s) 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: overall HEALTH_OK 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: mon.a calling monitor election 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: mon.b calling monitor election 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: mon.a is new leader, mons a,b in quorum 
(ranks 0,1) 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: monmap epoch 3 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: last_changed 2026-03-09T20:18:35.584014+0000 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: created 2026-03-09T20:17:53.169307+0000 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: min_mon_release 19 (squid) 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: election_strategy: 1 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: 0: v1:192.168.123.105:6789/0 mon.a 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: 1: v1:192.168.123.109:6789/0 mon.b 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: 2: v1:192.168.123.105:6790/0 mon.c 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: fsmap 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: osdmap e4: 0 total, 0 up, 0 in 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: mgrmap e13: y(active, since 23s) 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: overall HEALTH_OK 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: Updating vm05:/etc/ceph/ceph.conf 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: Updating vm09:/etc/ceph/ceph.conf 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: Updating vm05:/var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/config/ceph.conf 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: 
from='client.? v1:192.168.123.109:0/3542784716' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: Updating vm09:/var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/config/ceph.conf 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:42.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:42.859 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:42.859 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: Reconfiguring mon.a (unknown last config time)... 2026-03-09T20:18:42.859 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T20:18:42.859 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T20:18:42.859 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:18:42.859 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: Reconfiguring daemon mon.a on vm05 2026-03-09T20:18:42.859 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3171058630' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:18:42.859 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:42.859 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:42.859 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: Reconfiguring mon.c (monmap changed)... 
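The journalctl streams for mon.a and mon.c are both tailing the same cluster log, which is why the election, monmap-epoch and "Reconfiguring ..." messages appear twice in this stretch. Neither of the commands below appears in this log; they are only a sketch of how the election outcome being described could be checked directly from any node holding the admin keyring.

    sudo cephadm shell -- ceph mon stat                        # one-line quorum summary
    sudo cephadm shell -- ceph quorum_status -f json-pretty    # full election/quorum detail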
2026-03-09T20:18:42.859 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T20:18:42.859 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T20:18:42.859 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:18:42.859 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: Reconfiguring daemon mon.c on vm05 2026-03-09T20:18:42.859 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:18:42.859 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:42 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:18:42.859 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records in 2026-03-09T20:18:42.859 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records out 2026-03-09T20:18:42.859 INFO:teuthology.orchestra.run.vm05.stderr:512 bytes copied, 0.000136407 s, 3.8 MB/s 2026-03-09T20:18:42.859 DEBUG:teuthology.orchestra.run.vm05:> ! mount | grep -v devtmpfs | grep -q /dev/vdd 2026-03-09T20:18:42.915 DEBUG:teuthology.orchestra.run.vm05:> stat /dev/vde 2026-03-09T20:18:42.972 INFO:teuthology.orchestra.run.vm05.stdout: File: /dev/vde 2026-03-09T20:18:42.972 INFO:teuthology.orchestra.run.vm05.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-09T20:18:42.972 INFO:teuthology.orchestra.run.vm05.stdout:Device: 6h/6d Inode: 257 Links: 1 Device type: fc,40 2026-03-09T20:18:42.972 INFO:teuthology.orchestra.run.vm05.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T20:18:42.972 INFO:teuthology.orchestra.run.vm05.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-09T20:18:42.972 INFO:teuthology.orchestra.run.vm05.stdout:Access: 2026-03-09 20:18:23.145080896 +0000 2026-03-09T20:18:42.972 INFO:teuthology.orchestra.run.vm05.stdout:Modify: 2026-03-09 20:14:19.311380415 +0000 2026-03-09T20:18:42.972 INFO:teuthology.orchestra.run.vm05.stdout:Change: 2026-03-09 20:14:19.311380415 +0000 2026-03-09T20:18:42.972 INFO:teuthology.orchestra.run.vm05.stdout: Birth: 2026-03-09 20:10:58.314000000 +0000 2026-03-09T20:18:42.972 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/dev/vde of=/dev/null count=1 2026-03-09T20:18:43.036 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records in 2026-03-09T20:18:43.036 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records out 2026-03-09T20:18:43.036 INFO:teuthology.orchestra.run.vm05.stderr:512 bytes copied, 0.000147827 s, 3.5 MB/s 2026-03-09T20:18:43.037 DEBUG:teuthology.orchestra.run.vm05:> ! mount | grep -v devtmpfs | grep -q /dev/vde 2026-03-09T20:18:43.094 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T20:18:43.094 DEBUG:teuthology.orchestra.run.vm09:> dd if=/scratch_devs of=/dev/stdout 2026-03-09T20:18:43.119 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T20:18:43.119 DEBUG:teuthology.orchestra.run.vm09:> ls /dev/[sv]d? 
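The same device vetting now repeats on vm09. Once a device passes, the task wipes it with ceph-volume before the OSD is created; the zap of /dev/vde for osd.0 on vm05 shows up a little further below. The sketch pairs that logged zap with a typical creation command; the "orch daemon add osd" step is an assumption for illustration and is not part of this excerpt.

    # Wipe the device, then hand it to cephadm as a new OSD (sketch):
    sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df \
        ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- lvm zap /dev/vde
    sudo cephadm shell -- ceph orch daemon add osd vm05:/dev/vde   # assumed follow-up, not shown in the log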
2026-03-09T20:18:43.151 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:18:43 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:18:43.150+0000 7f9499c13140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T20:18:43.232 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vda 2026-03-09T20:18:43.232 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vdb 2026-03-09T20:18:43.232 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vdc 2026-03-09T20:18:43.232 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vdd 2026-03-09T20:18:43.232 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vde 2026-03-09T20:18:43.232 WARNING:teuthology.misc:Removing root device: /dev/vda from device list 2026-03-09T20:18:43.232 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde'] 2026-03-09T20:18:43.232 DEBUG:teuthology.orchestra.run.vm09:> stat /dev/vdb 2026-03-09T20:18:43.249 INFO:teuthology.orchestra.run.vm09.stdout: File: /dev/vdb 2026-03-09T20:18:43.250 INFO:teuthology.orchestra.run.vm09.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-09T20:18:43.250 INFO:teuthology.orchestra.run.vm09.stdout:Device: 6h/6d Inode: 254 Links: 1 Device type: fc,10 2026-03-09T20:18:43.250 INFO:teuthology.orchestra.run.vm09.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T20:18:43.250 INFO:teuthology.orchestra.run.vm09.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-09T20:18:43.250 INFO:teuthology.orchestra.run.vm09.stdout:Access: 2026-03-09 20:18:27.570545781 +0000 2026-03-09T20:18:43.250 INFO:teuthology.orchestra.run.vm09.stdout:Modify: 2026-03-09 20:14:18.848789675 +0000 2026-03-09T20:18:43.250 INFO:teuthology.orchestra.run.vm09.stdout:Change: 2026-03-09 20:14:18.848789675 +0000 2026-03-09T20:18:43.250 INFO:teuthology.orchestra.run.vm09.stdout: Birth: 2026-03-09 20:11:29.294000000 +0000 2026-03-09T20:18:43.250 DEBUG:teuthology.orchestra.run.vm09:> sudo dd if=/dev/vdb of=/dev/null count=1 2026-03-09T20:18:43.363 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records in 2026-03-09T20:18:43.363 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records out 2026-03-09T20:18:43.363 INFO:teuthology.orchestra.run.vm09.stderr:512 bytes copied, 0.000157465 s, 3.3 MB/s 2026-03-09T20:18:43.365 DEBUG:teuthology.orchestra.run.vm09:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdb 2026-03-09T20:18:43.421 DEBUG:teuthology.orchestra.run.vm09:> stat /dev/vdc 2026-03-09T20:18:43.497 INFO:teuthology.orchestra.run.vm09.stdout: File: /dev/vdc 2026-03-09T20:18:43.498 INFO:teuthology.orchestra.run.vm09.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-09T20:18:43.498 INFO:teuthology.orchestra.run.vm09.stdout:Device: 6h/6d Inode: 255 Links: 1 Device type: fc,20 2026-03-09T20:18:43.498 INFO:teuthology.orchestra.run.vm09.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T20:18:43.498 INFO:teuthology.orchestra.run.vm09.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-09T20:18:43.498 INFO:teuthology.orchestra.run.vm09.stdout:Access: 2026-03-09 20:18:27.606545822 +0000 2026-03-09T20:18:43.498 INFO:teuthology.orchestra.run.vm09.stdout:Modify: 2026-03-09 20:14:18.850789677 +0000 2026-03-09T20:18:43.498 INFO:teuthology.orchestra.run.vm09.stdout:Change: 2026-03-09 20:14:18.850789677 +0000 2026-03-09T20:18:43.498 INFO:teuthology.orchestra.run.vm09.stdout: Birth: 2026-03-09 20:11:29.300000000 +0000 2026-03-09T20:18:43.498 DEBUG:teuthology.orchestra.run.vm09:> sudo dd if=/dev/vdc of=/dev/null count=1 2026-03-09T20:18:43.547 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records in 2026-03-09T20:18:43.547 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records out 2026-03-09T20:18:43.547 INFO:teuthology.orchestra.run.vm09.stderr:512 bytes copied, 0.000167894 s, 3.0 MB/s 2026-03-09T20:18:43.548 DEBUG:teuthology.orchestra.run.vm09:> ! mount | grep -v devtmpfs | grep -q /dev/vdc 2026-03-09T20:18:43.631 DEBUG:teuthology.orchestra.run.vm09:> stat /dev/vdd 2026-03-09T20:18:43.652 INFO:teuthology.orchestra.run.vm09.stdout: File: /dev/vdd 2026-03-09T20:18:43.652 INFO:teuthology.orchestra.run.vm09.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-09T20:18:43.652 INFO:teuthology.orchestra.run.vm09.stdout:Device: 6h/6d Inode: 256 Links: 1 Device type: fc,30 2026-03-09T20:18:43.652 INFO:teuthology.orchestra.run.vm09.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T20:18:43.652 INFO:teuthology.orchestra.run.vm09.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-09T20:18:43.652 INFO:teuthology.orchestra.run.vm09.stdout:Access: 2026-03-09 20:18:27.630545849 +0000 2026-03-09T20:18:43.652 INFO:teuthology.orchestra.run.vm09.stdout:Modify: 2026-03-09 20:14:18.816789648 +0000 2026-03-09T20:18:43.652 INFO:teuthology.orchestra.run.vm09.stdout:Change: 2026-03-09 20:14:18.816789648 +0000 2026-03-09T20:18:43.652 INFO:teuthology.orchestra.run.vm09.stdout: Birth: 2026-03-09 20:11:29.307000000 +0000 2026-03-09T20:18:43.652 DEBUG:teuthology.orchestra.run.vm09:> sudo dd if=/dev/vdd of=/dev/null count=1 2026-03-09T20:18:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:43 vm05 ceph-mon[51870]: mon.c calling monitor election 2026-03-09T20:18:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:43 vm05 ceph-mon[51870]: mon.c calling monitor election 2026-03-09T20:18:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:43 vm05 ceph-mon[51870]: mon.a calling monitor election 2026-03-09T20:18:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:43 vm05 ceph-mon[51870]: mon.b calling monitor election 2026-03-09T20:18:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:43 vm05 ceph-mon[51870]: mon.a is new leader, mons a,b,c in quorum (ranks 0,1,2) 2026-03-09T20:18:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:43 vm05 
ceph-mon[51870]: monmap epoch 3 2026-03-09T20:18:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:43 vm05 ceph-mon[51870]: fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea 2026-03-09T20:18:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:43 vm05 ceph-mon[51870]: last_changed 2026-03-09T20:18:35.584014+0000 2026-03-09T20:18:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:43 vm05 ceph-mon[51870]: created 2026-03-09T20:17:53.169307+0000 2026-03-09T20:18:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:43 vm05 ceph-mon[51870]: min_mon_release 19 (squid) 2026-03-09T20:18:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:43 vm05 ceph-mon[51870]: election_strategy: 1 2026-03-09T20:18:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:43 vm05 ceph-mon[51870]: 0: v1:192.168.123.105:6789/0 mon.a 2026-03-09T20:18:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:43 vm05 ceph-mon[51870]: 1: v1:192.168.123.109:6789/0 mon.b 2026-03-09T20:18:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:43 vm05 ceph-mon[51870]: 2: v1:192.168.123.105:6790/0 mon.c 2026-03-09T20:18:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:43 vm05 ceph-mon[51870]: fsmap 2026-03-09T20:18:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:43 vm05 ceph-mon[51870]: osdmap e4: 0 total, 0 up, 0 in 2026-03-09T20:18:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:43 vm05 ceph-mon[51870]: mgrmap e13: y(active, since 25s) 2026-03-09T20:18:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:43 vm05 ceph-mon[51870]: overall HEALTH_OK 2026-03-09T20:18:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:43 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:43 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:43 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:43 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:43 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:18:43.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:43 vm05 ceph-mon[61345]: mon.c calling monitor election 2026-03-09T20:18:43.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:43 vm05 ceph-mon[61345]: mon.c calling monitor election 2026-03-09T20:18:43.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:43 vm05 ceph-mon[61345]: mon.a calling monitor election 2026-03-09T20:18:43.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:43 vm05 ceph-mon[61345]: mon.b calling monitor election 2026-03-09T20:18:43.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:43 vm05 ceph-mon[61345]: mon.a is new leader, mons a,b,c in quorum (ranks 0,1,2) 2026-03-09T20:18:43.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:43 vm05 ceph-mon[61345]: monmap epoch 3 2026-03-09T20:18:43.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:43 vm05 ceph-mon[61345]: fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea 2026-03-09T20:18:43.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:43 vm05 ceph-mon[61345]: last_changed 
2026-03-09T20:18:35.584014+0000 2026-03-09T20:18:43.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:43 vm05 ceph-mon[61345]: created 2026-03-09T20:17:53.169307+0000 2026-03-09T20:18:43.661 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:43 vm05 ceph-mon[61345]: min_mon_release 19 (squid) 2026-03-09T20:18:43.661 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:43 vm05 ceph-mon[61345]: election_strategy: 1 2026-03-09T20:18:43.661 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:43 vm05 ceph-mon[61345]: 0: v1:192.168.123.105:6789/0 mon.a 2026-03-09T20:18:43.661 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:43 vm05 ceph-mon[61345]: 1: v1:192.168.123.109:6789/0 mon.b 2026-03-09T20:18:43.661 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:43 vm05 ceph-mon[61345]: 2: v1:192.168.123.105:6790/0 mon.c 2026-03-09T20:18:43.661 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:43 vm05 ceph-mon[61345]: fsmap 2026-03-09T20:18:43.661 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:43 vm05 ceph-mon[61345]: osdmap e4: 0 total, 0 up, 0 in 2026-03-09T20:18:43.661 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:43 vm05 ceph-mon[61345]: mgrmap e13: y(active, since 25s) 2026-03-09T20:18:43.661 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:43 vm05 ceph-mon[61345]: overall HEALTH_OK 2026-03-09T20:18:43.661 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:43 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:43.661 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:43 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:43.661 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:43 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:43.661 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:43 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:43.661 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:43 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:18:43.716 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:18:43 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:18:43.611+0000 7f9499c13140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T20:18:43.720 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records in 2026-03-09T20:18:43.720 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records out 2026-03-09T20:18:43.720 INFO:teuthology.orchestra.run.vm09.stderr:512 bytes copied, 0.00011776 s, 4.3 MB/s 2026-03-09T20:18:43.720 DEBUG:teuthology.orchestra.run.vm09:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdd 2026-03-09T20:18:43.782 DEBUG:teuthology.orchestra.run.vm09:> stat /dev/vde 2026-03-09T20:18:43.844 INFO:teuthology.orchestra.run.vm09.stdout: File: /dev/vde 2026-03-09T20:18:43.844 INFO:teuthology.orchestra.run.vm09.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-09T20:18:43.844 INFO:teuthology.orchestra.run.vm09.stdout:Device: 6h/6d Inode: 257 Links: 1 Device type: fc,40 2026-03-09T20:18:43.844 INFO:teuthology.orchestra.run.vm09.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T20:18:43.844 INFO:teuthology.orchestra.run.vm09.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-09T20:18:43.844 INFO:teuthology.orchestra.run.vm09.stdout:Access: 2026-03-09 20:18:27.654545876 +0000 2026-03-09T20:18:43.844 INFO:teuthology.orchestra.run.vm09.stdout:Modify: 2026-03-09 20:14:18.878789701 +0000 2026-03-09T20:18:43.844 INFO:teuthology.orchestra.run.vm09.stdout:Change: 2026-03-09 20:14:18.878789701 +0000 2026-03-09T20:18:43.844 INFO:teuthology.orchestra.run.vm09.stdout: Birth: 2026-03-09 20:11:29.311000000 +0000 2026-03-09T20:18:43.845 DEBUG:teuthology.orchestra.run.vm09:> sudo dd if=/dev/vde of=/dev/null count=1 2026-03-09T20:18:43.914 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records in 2026-03-09T20:18:43.914 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records out 2026-03-09T20:18:43.914 INFO:teuthology.orchestra.run.vm09.stderr:512 bytes copied, 0.000173496 s, 3.0 MB/s 2026-03-09T20:18:43.915 DEBUG:teuthology.orchestra.run.vm09:> ! mount | grep -v devtmpfs | grep -q /dev/vde 2026-03-09T20:18:43.980 INFO:tasks.cephadm:Deploying osd.0 on vm05 with /dev/vde... 2026-03-09T20:18:43.980 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- lvm zap /dev/vde 2026-03-09T20:18:44.022 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:18:43 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:18:43.949+0000 7f9499c13140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T20:18:44.161 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:18:44.435 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:44 vm05 ceph-mon[51870]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:18:44.435 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:44 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:18:44.435 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:44 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:44.435 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:44 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:44.435 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:44 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:18:44.435 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:44 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:18:44.435 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:44 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:44.435 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:44 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T20:18:44.435 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:44 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T20:18:44.435 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:44 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:18:44.435 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:44 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:44.435 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:44 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:44.435 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:44 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:18:44.435 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:44 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:18:44.435 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:44 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:18:44.435 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:44 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:44.435 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:44 vm05 ceph-mon[61345]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:18:44.435 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:44 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:18:44.435 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:44 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:44.435 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:44 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:44.435 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:44 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:18:44.435 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:44 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:18:44.435 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:44 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:44.435 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:44 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": 
"auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T20:18:44.435 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:44 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T20:18:44.435 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:44 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:18:44.435 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:44 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:44.436 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:44 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:44.436 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:44 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:18:44.436 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:44 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:18:44.436 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:44 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:18:44.436 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:44 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:44.522 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:18:44 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T20:18:44.522 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:18:44 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-09T20:18:44.522 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:18:44 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: from numpy import show_config as show_numpy_config 2026-03-09T20:18:44.522 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:18:44 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:18:44.040+0000 7f9499c13140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T20:18:44.523 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:18:44 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:18:44.080+0000 7f9499c13140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T20:18:44.523 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:18:44 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:18:44.151+0000 7f9499c13140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T20:18:44.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:44 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:44.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:18:44 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:18:44.583+0000 7fcab79d4640 -1 mgr.server handle_report got status from non-daemon mon.c 2026-03-09T20:18:45.022 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:18:44 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:18:44.632+0000 7f9499c13140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T20:18:45.022 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:18:44 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:18:44.741+0000 7f9499c13140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T20:18:45.022 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:18:44 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:18:44.777+0000 7f9499c13140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T20:18:45.022 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:18:44 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:18:44.809+0000 7f9499c13140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T20:18:45.022 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:18:44 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:18:44.847+0000 7f9499c13140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T20:18:45.022 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:18:44 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:18:44.881+0000 7f9499c13140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T20:18:45.179 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:18:45.199 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph orch daemon add osd vm05:/dev/vde 2026-03-09T20:18:45.303 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:45 vm09 ceph-mon[54524]: Reconfiguring mgr.y (unknown last config time)... 
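After the zap, the task opens a cephadm shell and asks the orchestrator to create the OSD on the freshly cleaned device; the "Created osd(s) 0 on host 'vm05'" line further below is the acknowledgement. The command as captured in this run:

  sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea \
      -- ceph orch daemon add osd vm05:/dev/vde

The same zap/add cycle repeats for each remaining device (osd.1 on /dev/vdd is started a few seconds later in the log).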
2026-03-09T20:18:45.303 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:45 vm09 ceph-mon[54524]: Reconfiguring daemon mgr.y on vm05 2026-03-09T20:18:45.303 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:18:45 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:18:45.047+0000 7f9499c13140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T20:18:45.303 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:18:45 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:18:45.095+0000 7f9499c13140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T20:18:45.360 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:18:45.385 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:45 vm05 ceph-mon[51870]: Reconfiguring mgr.y (unknown last config time)... 2026-03-09T20:18:45.385 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:45 vm05 ceph-mon[51870]: Reconfiguring daemon mgr.y on vm05 2026-03-09T20:18:45.385 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:45 vm05 ceph-mon[61345]: Reconfiguring mgr.y (unknown last config time)... 2026-03-09T20:18:45.385 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:45 vm05 ceph-mon[61345]: Reconfiguring daemon mgr.y on vm05 2026-03-09T20:18:45.507 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:45.506+0000 7f225f7b2640 1 -- 192.168.123.105:0/964701516 >> v1:192.168.123.105:6790/0 conn(0x7f2258104a80 legacy=0x7f2258106f00 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:45.507 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:45.507+0000 7f225f7b2640 1 -- 192.168.123.105:0/964701516 shutdown_connections 2026-03-09T20:18:45.507 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:45.507+0000 7f225f7b2640 1 -- 192.168.123.105:0/964701516 >> 192.168.123.105:0/964701516 conn(0x7f22580fcd60 msgr2=0x7f22580ff1a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:45.507 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:45.507+0000 7f225f7b2640 1 -- 192.168.123.105:0/964701516 shutdown_connections 2026-03-09T20:18:45.507 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:45.507+0000 7f225f7b2640 1 -- 192.168.123.105:0/964701516 wait complete. 
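The verbose messenger lines here come from the short-lived client that the cephadm shell spins up for the orchestrator call. Once the call returns, one way to confirm the daemon landed is to query the orchestrator and the OSD map and then read the new systemd unit's journal, the same unit the harness starts tailing further below. This is a sketch rather than something this job runs itself (only the journalctl unit name is taken from the log):

  FSID=c0151936-1bf4-11f1-b896-23f7bea8a6ea
  sudo cephadm shell --fsid "$FSID" -- ceph orch ps --daemon-type osd   # placement and status of osd daemons
  sudo cephadm shell --fsid "$FSID" -- ceph osd tree                    # up/in state and CRUSH location
  sudo journalctl -n 50 -u "ceph-$FSID@osd.0.service"                   # unit logs for the new osd.0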
2026-03-09T20:18:45.507 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:45.508+0000 7f225f7b2640 1 Processor -- start 2026-03-09T20:18:45.507 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:45.508+0000 7f225f7b2640 1 -- start start 2026-03-09T20:18:45.508 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:45.508+0000 7f225f7b2640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f22581a0c80 con 0x7f2258111110 2026-03-09T20:18:45.508 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:45.508+0000 7f225f7b2640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f22581ac440 con 0x7f2258100f50 2026-03-09T20:18:45.508 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:45.508+0000 7f225f7b2640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f22581ad620 con 0x7f2258104a80 2026-03-09T20:18:45.508 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:45.508+0000 7f225d527640 1 --1- >> v1:192.168.123.109:6789/0 conn(0x7f2258100f50 0x7f22581a0100 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.109:6789/0 says I am v1:192.168.123.105:60512/0 (socket says 192.168.123.105:60512) 2026-03-09T20:18:45.508 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:45.508+0000 7f225d527640 1 -- 192.168.123.105:0/2235939340 learned_addr learned my addr 192.168.123.105:0/2235939340 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:45.508 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:45.509+0000 7f22467fc640 1 -- 192.168.123.105:0/2235939340 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1771697034 0 0) 0x7f22581ad620 con 0x7f2258104a80 2026-03-09T20:18:45.508 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:45.509+0000 7f22467fc640 1 -- 192.168.123.105:0/2235939340 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f2234003620 con 0x7f2258104a80 2026-03-09T20:18:45.509 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:45.509+0000 7f22467fc640 1 -- 192.168.123.105:0/2235939340 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 4013637629 0 0) 0x7f2234003620 con 0x7f2258104a80 2026-03-09T20:18:45.509 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:45.509+0000 7f22467fc640 1 -- 192.168.123.105:0/2235939340 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f22581ad620 con 0x7f2258104a80 2026-03-09T20:18:45.509 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:45.509+0000 7f22467fc640 1 -- 192.168.123.105:0/2235939340 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f224c0031f0 con 0x7f2258104a80 2026-03-09T20:18:45.509 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:45.510+0000 7f22467fc640 1 -- 192.168.123.105:0/2235939340 <== mon.2 v1:192.168.123.105:6790/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 4227658090 0 0) 0x7f22581ad620 con 0x7f2258104a80 2026-03-09T20:18:45.509 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:45.510+0000 7f22467fc640 1 -- 192.168.123.105:0/2235939340 >> v1:192.168.123.109:6789/0 conn(0x7f2258100f50 legacy=0x7f22581a0100 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:45.510 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:45.510+0000 7f22467fc640 1 -- 
192.168.123.105:0/2235939340 >> v1:192.168.123.105:6789/0 conn(0x7f2258111110 legacy=0x7f22581a9d10 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:45.511 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:45.510+0000 7f22467fc640 1 -- 192.168.123.105:0/2235939340 --> v1:192.168.123.105:6790/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f22581ae800 con 0x7f2258104a80 2026-03-09T20:18:45.511 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:45.510+0000 7f225f7b2640 1 -- 192.168.123.105:0/2235939340 --> v1:192.168.123.105:6790/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f22581ac670 con 0x7f2258104a80 2026-03-09T20:18:45.511 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:45.510+0000 7f225f7b2640 1 -- 192.168.123.105:0/2235939340 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=0}) -- 0x7f22581acc50 con 0x7f2258104a80 2026-03-09T20:18:45.511 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:45.510+0000 7f22467fc640 1 -- 192.168.123.105:0/2235939340 <== mon.2 v1:192.168.123.105:6790/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f224c004600 con 0x7f2258104a80 2026-03-09T20:18:45.511 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:45.510+0000 7f22467fc640 1 -- 192.168.123.105:0/2235939340 <== mon.2 v1:192.168.123.105:6790/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f224c004e20 con 0x7f2258104a80 2026-03-09T20:18:45.511 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:45.511+0000 7f22467fc640 1 -- 192.168.123.105:0/2235939340 <== mon.2 v1:192.168.123.105:6790/0 7 ==== mgrmap(e 13) ==== 50271+0+0 (unknown 2746147771 0 0) 0x7f224c011430 con 0x7f2258104a80 2026-03-09T20:18:45.511 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:45.511+0000 7f22467fc640 1 -- 192.168.123.105:0/2235939340 <== mon.2 v1:192.168.123.105:6790/0 8 ==== osd_map(4..4 src has 1..4) ==== 1069+0+0 (unknown 2499184964 0 0) 0x7f224c04d2c0 con 0x7f2258104a80 2026-03-09T20:18:45.511 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:45.512+0000 7f225f7b2640 1 -- 192.168.123.105:0/2235939340 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f2220005180 con 0x7f2258104a80 2026-03-09T20:18:45.515 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:45.516+0000 7f22467fc640 1 -- 192.168.123.105:0/2235939340 <== mon.2 v1:192.168.123.105:6790/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f224c0178f0 con 0x7f2258104a80 2026-03-09T20:18:45.577 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:18:45 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:18:45.302+0000 7f9499c13140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T20:18:45.577 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:18:45 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:18:45.576+0000 7f9499c13140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T20:18:45.616 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:45.616+0000 7f225f7b2640 1 -- 192.168.123.105:0/2235939340 --> v1:192.168.123.105:6800/3290461294 -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vde", "target": ["mon-mgr", ""]}) -- 0x7f2220002bf0 con 0x7f223403ea00 2026-03-09T20:18:45.846 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:18:45 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 
2026-03-09T20:18:45.614+0000 7f9499c13140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T20:18:45.846 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:18:45 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:18:45.655+0000 7f9499c13140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T20:18:45.846 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:18:45 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:18:45.732+0000 7f9499c13140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T20:18:45.846 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:18:45 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:18:45.768+0000 7f9499c13140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T20:18:45.846 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:18:45 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:18:45.845+0000 7f9499c13140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T20:18:46.118 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:18:45 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:18:45.951+0000 7f9499c13140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T20:18:46.118 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:18:46 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:18:46.082+0000 7f9499c13140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T20:18:46.118 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:18:46 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:18:46.117+0000 7f9499c13140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T20:18:46.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:46 vm05 ceph-mon[51870]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:18:46.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:46 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:46.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:46 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:18:46.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:46 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:18:46.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:46 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:18:46.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:46 vm05 ceph-mon[51870]: Standby manager daemon x started 2026-03-09T20:18:46.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:46 vm05 ceph-mon[51870]: from='mgr.? v1:192.168.123.109:0/14050332' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T20:18:46.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:46 vm05 ceph-mon[51870]: from='mgr.? v1:192.168.123.109:0/14050332' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T20:18:46.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:46 vm05 ceph-mon[51870]: from='mgr.? 
v1:192.168.123.109:0/14050332' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T20:18:46.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:46 vm05 ceph-mon[51870]: from='mgr.? v1:192.168.123.109:0/14050332' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T20:18:46.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:46 vm05 ceph-mon[61345]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:18:46.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:46 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:46.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:46 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:18:46.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:46 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:18:46.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:46 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:18:46.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:46 vm05 ceph-mon[61345]: Standby manager daemon x started 2026-03-09T20:18:46.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:46 vm05 ceph-mon[61345]: from='mgr.? v1:192.168.123.109:0/14050332' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T20:18:46.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:46 vm05 ceph-mon[61345]: from='mgr.? v1:192.168.123.109:0/14050332' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T20:18:46.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:46 vm05 ceph-mon[61345]: from='mgr.? v1:192.168.123.109:0/14050332' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T20:18:46.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:46 vm05 ceph-mon[61345]: from='mgr.? 
v1:192.168.123.109:0/14050332' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T20:18:46.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:46 vm09 ceph-mon[54524]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:18:46.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:46 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:46.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:46 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:18:46.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:46 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:18:46.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:46 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:18:46.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:46 vm09 ceph-mon[54524]: Standby manager daemon x started 2026-03-09T20:18:46.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:46 vm09 ceph-mon[54524]: from='mgr.? v1:192.168.123.109:0/14050332' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T20:18:46.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:46 vm09 ceph-mon[54524]: from='mgr.? v1:192.168.123.109:0/14050332' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T20:18:46.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:46 vm09 ceph-mon[54524]: from='mgr.? v1:192.168.123.109:0/14050332' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T20:18:46.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:46 vm09 ceph-mon[54524]: from='mgr.? v1:192.168.123.109:0/14050332' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T20:18:46.621 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:46.621+0000 7f22467fc640 1 -- 192.168.123.105:0/2235939340 <== mon.2 v1:192.168.123.105:6790/0 10 ==== mgrmap(e 14) ==== 99944+0+0 (unknown 1147942814 0 0) 0x7f224c013280 con 0x7f2258104a80 2026-03-09T20:18:47.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:47 vm05 ceph-mon[51870]: from='client.24110 v1:192.168.123.105:0/2235939340' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:47.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:47 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/928217310' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "35c6a684-ee69-44bf-83ae-27ddd2fd2486"}]: dispatch 2026-03-09T20:18:47.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:47 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/928217310' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "35c6a684-ee69-44bf-83ae-27ddd2fd2486"}]': finished 2026-03-09T20:18:47.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:47 vm05 ceph-mon[51870]: osdmap e5: 1 total, 0 up, 1 in 2026-03-09T20:18:47.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:47 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:18:47.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:47 vm05 ceph-mon[51870]: mgrmap e14: y(active, since 29s), standbys: x 2026-03-09T20:18:47.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:47 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T20:18:47.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:47 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1586649788' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:18:47.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:47 vm05 ceph-mon[61345]: from='client.24110 v1:192.168.123.105:0/2235939340' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:47.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:47 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/928217310' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "35c6a684-ee69-44bf-83ae-27ddd2fd2486"}]: dispatch 2026-03-09T20:18:47.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:47 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/928217310' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "35c6a684-ee69-44bf-83ae-27ddd2fd2486"}]': finished 2026-03-09T20:18:47.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:47 vm05 ceph-mon[61345]: osdmap e5: 1 total, 0 up, 1 in 2026-03-09T20:18:47.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:47 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:18:47.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:47 vm05 ceph-mon[61345]: mgrmap e14: y(active, since 29s), standbys: x 2026-03-09T20:18:47.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:47 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T20:18:47.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:47 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1586649788' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:18:47.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:47 vm09 ceph-mon[54524]: from='client.24110 v1:192.168.123.105:0/2235939340' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:47.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:47 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/928217310' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "35c6a684-ee69-44bf-83ae-27ddd2fd2486"}]: dispatch 2026-03-09T20:18:47.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:47 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/928217310' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "35c6a684-ee69-44bf-83ae-27ddd2fd2486"}]': finished 2026-03-09T20:18:47.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:47 vm09 ceph-mon[54524]: osdmap e5: 1 total, 0 up, 1 in 2026-03-09T20:18:47.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:47 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:18:47.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:47 vm09 ceph-mon[54524]: mgrmap e14: y(active, since 29s), standbys: x 2026-03-09T20:18:47.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:47 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T20:18:47.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:47 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1586649788' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:18:48.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:48 vm09 ceph-mon[54524]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:18:48.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:48 vm05 ceph-mon[61345]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:18:48.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:48 vm05 ceph-mon[51870]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:18:50.640 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:50 vm05 ceph-mon[51870]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:18:50.641 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:50 vm05 ceph-mon[61345]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:18:51.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:50 vm09 ceph-mon[54524]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:18:51.762 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:51 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T20:18:51.762 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:51 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:18:51.762 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:51 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T20:18:51.762 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:51 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:18:52.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:51 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T20:18:52.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:51 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:18:52.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:52 vm05 ceph-mon[61345]: Deploying daemon osd.0 on vm05 2026-03-09T20:18:52.660 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:52 vm05 ceph-mon[61345]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:18:52.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:52 vm05 ceph-mon[51870]: Deploying daemon osd.0 on vm05 2026-03-09T20:18:52.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:52 vm05 ceph-mon[51870]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:18:53.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:52 vm09 ceph-mon[54524]: Deploying daemon osd.0 on vm05 2026-03-09T20:18:53.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:52 vm09 ceph-mon[54524]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:18:53.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:53 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:18:53.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:53 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:53.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:53 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:53 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:18:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:53 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:53 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:54.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:53 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:18:54.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:53 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:54.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:53 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:54.243 INFO:teuthology.orchestra.run.vm05.stdout:Created osd(s) 0 on host 'vm05' 2026-03-09T20:18:54.243 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:54.243+0000 7f22467fc640 1 -- 192.168.123.105:0/2235939340 <== mgr.14150 v1:192.168.123.105:6800/3290461294 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (unknown 0 0 4132516384) 0x7f2220002bf0 con 0x7f223403ea00 2026-03-09T20:18:54.246 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:54.245+0000 7f225f7b2640 1 -- 192.168.123.105:0/2235939340 >> v1:192.168.123.105:6800/3290461294 conn(0x7f223403ea00 legacy=0x7f2234040ec0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:54.246 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:54.245+0000 7f225f7b2640 1 -- 192.168.123.105:0/2235939340 >> v1:192.168.123.105:6790/0 conn(0x7f2258104a80 legacy=0x7f22581a65e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:54.246 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:54.245+0000 7f225f7b2640 1 -- 192.168.123.105:0/2235939340 shutdown_connections 2026-03-09T20:18:54.246 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:54.245+0000 7f225f7b2640 1 -- 192.168.123.105:0/2235939340 >> 192.168.123.105:0/2235939340 conn(0x7f22580fcd60 msgr2=0x7f22580ff150 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:54.246 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:54.246+0000 7f225f7b2640 1 -- 192.168.123.105:0/2235939340 shutdown_connections 2026-03-09T20:18:54.246 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:54.246+0000 7f225f7b2640 1 -- 192.168.123.105:0/2235939340 wait complete. 2026-03-09T20:18:54.394 DEBUG:teuthology.orchestra.run.vm05:osd.0> sudo journalctl -f -n 0 -u ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@osd.0.service 2026-03-09T20:18:54.395 INFO:tasks.cephadm:Deploying osd.1 on vm05 with /dev/vdd... 2026-03-09T20:18:54.395 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- lvm zap /dev/vdd 2026-03-09T20:18:54.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:54 vm05 ceph-mon[51870]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:18:54.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:54 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:54.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:54 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:54.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:54 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:18:54.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:54 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:18:54.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:54 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:54.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:54 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:18:54.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:54 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:54.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:54 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:54.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:54 vm05 ceph-mon[61345]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:18:54.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:54 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:54.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:54 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:54.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:54 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:18:54.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:54 vm05 ceph-mon[61345]: 
from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:18:54.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:54 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:54.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:54 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:18:54.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:54 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:54.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:54 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:54.708 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:18:54.951 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 20:18:54 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-0[65089]: 2026-03-09T20:18:54.898+0000 7f0ab8dc8740 -1 osd.0 0 log_to_monitors true 2026-03-09T20:18:55.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:54 vm09 ceph-mon[54524]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:18:55.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:54 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:55.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:54 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:55.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:54 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:18:55.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:54 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:18:55.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:54 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:55.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:54 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:18:55.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:54 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:55.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:54 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:55.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:55 vm05 ceph-mon[51870]: from='osd.0 v1:192.168.123.105:6801/1625499026' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T20:18:55.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:55 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:55.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:55 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:55.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:55 vm05 ceph-mon[51870]: 
from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:18:55.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:55 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:18:55.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:55 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:18:55.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:55 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:55 vm05 ceph-mon[61345]: from='osd.0 v1:192.168.123.105:6801/1625499026' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T20:18:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:55 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:55 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:55 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:18:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:55 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:18:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:55 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:18:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:55 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:56.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:55 vm09 ceph-mon[54524]: from='osd.0 v1:192.168.123.105:6801/1625499026' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T20:18:56.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:55 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:56.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:55 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:56.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:55 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:18:56.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:55 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:18:56.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:55 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:18:56.022 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:55 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:18:56.180 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:18:56.197 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph orch daemon add osd vm05:/dev/vdd 2026-03-09T20:18:56.369 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:18:56.506 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:56.506+0000 7f8f6f577640 1 -- 192.168.123.105:0/2120267397 >> v1:192.168.123.109:6789/0 conn(0x7f8f70077aa0 legacy=0x7f8f70077ea0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:56.506 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:56.506+0000 7f8f6f577640 1 -- 192.168.123.105:0/2120267397 shutdown_connections 2026-03-09T20:18:56.506 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:56.506+0000 7f8f6f577640 1 -- 192.168.123.105:0/2120267397 >> 192.168.123.105:0/2120267397 conn(0x7f8f700fff30 msgr2=0x7f8f70102370 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:18:56.506 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:56.507+0000 7f8f6f577640 1 -- 192.168.123.105:0/2120267397 shutdown_connections 2026-03-09T20:18:56.506 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:56.507+0000 7f8f6f577640 1 -- 192.168.123.105:0/2120267397 wait complete. 2026-03-09T20:18:56.506 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:56.507+0000 7f8f6f577640 1 Processor -- start 2026-03-09T20:18:56.506 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:56.507+0000 7f8f6f577640 1 -- start start 2026-03-09T20:18:56.507 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:56.507+0000 7f8f6f577640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f8f701a0cc0 con 0x7f8f7007abd0 2026-03-09T20:18:56.507 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:56.507+0000 7f8f6f577640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f8f701ac480 con 0x7f8f70079f30 2026-03-09T20:18:56.507 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:56.507+0000 7f8f6f577640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f8f701ad660 con 0x7f8f70077aa0 2026-03-09T20:18:56.507 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:56.507+0000 7f8f6e575640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7f8f70077aa0 0x7f8f701a0140 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.105:58902/0 (socket says 192.168.123.105:58902) 2026-03-09T20:18:56.507 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:56.507+0000 7f8f6e575640 1 -- 192.168.123.105:0/2969955850 learned_addr learned my addr 192.168.123.105:0/2969955850 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:18:56.507 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:56.508+0000 7f8f537fe640 1 -- 192.168.123.105:0/2969955850 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1313848316 0 0) 0x7f8f701ad660 con 0x7f8f70077aa0 2026-03-09T20:18:56.507 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:56.508+0000 7f8f537fe640 1 -- 192.168.123.105:0/2969955850 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f8f44003620 con 0x7f8f70077aa0 2026-03-09T20:18:56.508 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:56.508+0000 7f8f537fe640 1 -- 192.168.123.105:0/2969955850 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2295933685 0 0) 0x7f8f44003620 con 0x7f8f70077aa0 2026-03-09T20:18:56.508 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:56.508+0000 7f8f537fe640 1 -- 192.168.123.105:0/2969955850 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f8f701ad660 con 0x7f8f70077aa0 2026-03-09T20:18:56.508 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:56.508+0000 7f8f537fe640 1 -- 192.168.123.105:0/2969955850 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f8f5c002c40 con 0x7f8f70077aa0 2026-03-09T20:18:56.509 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:56.509+0000 7f8f537fe640 1 -- 192.168.123.105:0/2969955850 <== mon.2 v1:192.168.123.105:6790/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2752937833 0 0) 0x7f8f701ad660 con 0x7f8f70077aa0 2026-03-09T20:18:56.509 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:56.509+0000 7f8f537fe640 1 -- 192.168.123.105:0/2969955850 >> v1:192.168.123.109:6789/0 conn(0x7f8f70079f30 legacy=0x7f8f701a6620 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:56.509 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:56.509+0000 7f8f537fe640 1 -- 192.168.123.105:0/2969955850 >> v1:192.168.123.105:6789/0 conn(0x7f8f7007abd0 legacy=0x7f8f701a9d50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:18:56.509 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:56.509+0000 7f8f537fe640 1 -- 192.168.123.105:0/2969955850 --> v1:192.168.123.105:6790/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f8f701ae840 con 0x7f8f70077aa0 2026-03-09T20:18:56.509 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:56.509+0000 7f8f6f577640 1 -- 192.168.123.105:0/2969955850 --> v1:192.168.123.105:6790/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f8f701ac6b0 con 0x7f8f70077aa0 2026-03-09T20:18:56.509 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:56.509+0000 7f8f6f577640 1 -- 192.168.123.105:0/2969955850 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=0}) -- 0x7f8f701acc90 con 0x7f8f70077aa0 2026-03-09T20:18:56.513 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:56.510+0000 7f8f6f577640 1 -- 192.168.123.105:0/2969955850 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f8f34005180 con 0x7f8f70077aa0 2026-03-09T20:18:56.513 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:56.513+0000 7f8f537fe640 1 -- 192.168.123.105:0/2969955850 <== mon.2 v1:192.168.123.105:6790/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f8f5c003440 con 0x7f8f70077aa0 2026-03-09T20:18:56.513 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:56.513+0000 7f8f537fe640 1 -- 192.168.123.105:0/2969955850 <== mon.2 v1:192.168.123.105:6790/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f8f5c004f20 con 0x7f8f70077aa0 2026-03-09T20:18:56.513 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:56.513+0000 7f8f537fe640 1 -- 192.168.123.105:0/2969955850 <== mon.2 
v1:192.168.123.105:6790/0 7 ==== mgrmap(e 14) ==== 99944+0+0 (unknown 1147942814 0 0) 0x7f8f5c0051a0 con 0x7f8f70077aa0 2026-03-09T20:18:56.513 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:56.513+0000 7f8f537fe640 1 -- 192.168.123.105:0/2969955850 <== mon.2 v1:192.168.123.105:6790/0 8 ==== osd_map(6..6 src has 1..6) ==== 1275+0+0 (unknown 4259022377 0 0) 0x7f8f5c094d00 con 0x7f8f70077aa0 2026-03-09T20:18:56.513 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:56.513+0000 7f8f537fe640 1 -- 192.168.123.105:0/2969955850 <== mon.2 v1:192.168.123.105:6790/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f8f5c005540 con 0x7f8f70077aa0 2026-03-09T20:18:56.623 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:18:56.623+0000 7f8f6f577640 1 -- 192.168.123.105:0/2969955850 --> v1:192.168.123.105:6800/3290461294 -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdd", "target": ["mon-mgr", ""]}) -- 0x7f8f34002bf0 con 0x7f8f440780d0 2026-03-09T20:18:56.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:56 vm05 ceph-mon[51870]: Detected new or changed devices on vm05 2026-03-09T20:18:56.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:56 vm05 ceph-mon[51870]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:18:56.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:56 vm05 ceph-mon[51870]: from='osd.0 v1:192.168.123.105:6801/1625499026' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T20:18:56.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:56 vm05 ceph-mon[51870]: osdmap e6: 1 total, 0 up, 1 in 2026-03-09T20:18:56.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:56 vm05 ceph-mon[51870]: from='osd.0 v1:192.168.123.105:6801/1625499026' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-09T20:18:56.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:56 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:18:56.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:56 vm05 ceph-mon[61345]: Detected new or changed devices on vm05 2026-03-09T20:18:56.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:56 vm05 ceph-mon[61345]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:18:56.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:56 vm05 ceph-mon[61345]: from='osd.0 v1:192.168.123.105:6801/1625499026' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T20:18:56.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:56 vm05 ceph-mon[61345]: osdmap e6: 1 total, 0 up, 1 in 2026-03-09T20:18:56.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:56 vm05 ceph-mon[61345]: from='osd.0 v1:192.168.123.105:6801/1625499026' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-09T20:18:56.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:56 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:18:57.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:56 vm09 ceph-mon[54524]: Detected new or 
changed devices on vm05 2026-03-09T20:18:57.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:56 vm09 ceph-mon[54524]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:18:57.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:56 vm09 ceph-mon[54524]: from='osd.0 v1:192.168.123.105:6801/1625499026' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T20:18:57.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:56 vm09 ceph-mon[54524]: osdmap e6: 1 total, 0 up, 1 in 2026-03-09T20:18:57.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:56 vm09 ceph-mon[54524]: from='osd.0 v1:192.168.123.105:6801/1625499026' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-09T20:18:57.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:56 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:18:57.617 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 20:18:57 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-0[65089]: 2026-03-09T20:18:57.450+0000 7f0ab4d49640 -1 osd.0 0 waiting for initial osdmap 2026-03-09T20:18:57.617 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 20:18:57 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-0[65089]: 2026-03-09T20:18:57.463+0000 7f0ab0372640 -1 osd.0 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T20:18:57.617 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:57 vm05 ceph-mon[51870]: from='osd.0 v1:192.168.123.105:6801/1625499026' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-09T20:18:57.617 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:57 vm05 ceph-mon[51870]: osdmap e7: 1 total, 0 up, 1 in 2026-03-09T20:18:57.617 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:57 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:18:57.617 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:57 vm05 ceph-mon[51870]: from='client.24137 v1:192.168.123.105:0/2969955850' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:57.618 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:57 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:18:57.618 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:57 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:18:57.618 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:57 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:18:57.618 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:57 vm05 ceph-mon[51870]: from='osd.0 v1:192.168.123.105:6801/1625499026' entity='osd.0' 2026-03-09T20:18:57.618 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:57 vm05 ceph-mon[51870]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:18:57.618 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:57 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/456827779' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "4a3ff444-017e-44cd-9222-93f1d8dcc4db"}]: dispatch 2026-03-09T20:18:57.618 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:57 vm05 ceph-mon[51870]: from='client.24145 ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "4a3ff444-017e-44cd-9222-93f1d8dcc4db"}]: dispatch 2026-03-09T20:18:57.618 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:57 vm05 ceph-mon[51870]: from='client.24145 ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4a3ff444-017e-44cd-9222-93f1d8dcc4db"}]': finished 2026-03-09T20:18:57.618 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:57 vm05 ceph-mon[51870]: osd.0 v1:192.168.123.105:6801/1625499026 boot 2026-03-09T20:18:57.618 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:57 vm05 ceph-mon[51870]: osdmap e8: 2 total, 1 up, 2 in 2026-03-09T20:18:57.618 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:57 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:18:57.618 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:57 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:18:57.618 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:57 vm05 ceph-mon[61345]: from='osd.0 v1:192.168.123.105:6801/1625499026' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-09T20:18:57.618 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:57 vm05 ceph-mon[61345]: osdmap e7: 1 total, 0 up, 1 in 2026-03-09T20:18:57.618 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:57 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:18:57.618 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:57 vm05 ceph-mon[61345]: from='client.24137 v1:192.168.123.105:0/2969955850' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:57.618 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:57 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:18:57.618 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:57 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:18:57.618 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:57 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:18:57.618 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:57 vm05 ceph-mon[61345]: from='osd.0 v1:192.168.123.105:6801/1625499026' entity='osd.0' 2026-03-09T20:18:57.618 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:57 vm05 ceph-mon[61345]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:18:57.618 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:57 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/456827779' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "4a3ff444-017e-44cd-9222-93f1d8dcc4db"}]: dispatch 2026-03-09T20:18:57.618 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:57 vm05 ceph-mon[61345]: from='client.24145 ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "4a3ff444-017e-44cd-9222-93f1d8dcc4db"}]: dispatch 2026-03-09T20:18:57.618 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:57 vm05 ceph-mon[61345]: from='client.24145 ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4a3ff444-017e-44cd-9222-93f1d8dcc4db"}]': finished 2026-03-09T20:18:57.618 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:57 vm05 ceph-mon[61345]: osd.0 v1:192.168.123.105:6801/1625499026 boot 2026-03-09T20:18:57.618 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:57 vm05 ceph-mon[61345]: osdmap e8: 2 total, 1 up, 2 in 2026-03-09T20:18:57.618 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:57 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:18:57.618 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:57 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:18:58.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:57 vm09 ceph-mon[54524]: from='osd.0 v1:192.168.123.105:6801/1625499026' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-09T20:18:58.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:57 vm09 ceph-mon[54524]: osdmap e7: 1 total, 0 up, 1 in 2026-03-09T20:18:58.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:57 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:18:58.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:57 vm09 ceph-mon[54524]: from='client.24137 v1:192.168.123.105:0/2969955850' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:58.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:57 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:18:58.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:57 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:18:58.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:57 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:18:58.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:57 vm09 ceph-mon[54524]: from='osd.0 v1:192.168.123.105:6801/1625499026' entity='osd.0' 2026-03-09T20:18:58.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:57 vm09 ceph-mon[54524]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:18:58.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:57 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/456827779' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "4a3ff444-017e-44cd-9222-93f1d8dcc4db"}]: dispatch 2026-03-09T20:18:58.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:57 vm09 ceph-mon[54524]: from='client.24145 ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "4a3ff444-017e-44cd-9222-93f1d8dcc4db"}]: dispatch 2026-03-09T20:18:58.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:57 vm09 ceph-mon[54524]: from='client.24145 ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4a3ff444-017e-44cd-9222-93f1d8dcc4db"}]': finished 2026-03-09T20:18:58.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:57 vm09 ceph-mon[54524]: osd.0 v1:192.168.123.105:6801/1625499026 boot 2026-03-09T20:18:58.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:57 vm09 ceph-mon[54524]: osdmap e8: 2 total, 1 up, 2 in 2026-03-09T20:18:58.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:57 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:18:58.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:57 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:18:58.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:58 vm05 ceph-mon[51870]: purged_snaps scrub starts 2026-03-09T20:18:58.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:58 vm05 ceph-mon[51870]: purged_snaps scrub ok 2026-03-09T20:18:58.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:58 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2053054613' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:18:58.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:58 vm05 ceph-mon[61345]: purged_snaps scrub starts 2026-03-09T20:18:58.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:58 vm05 ceph-mon[61345]: purged_snaps scrub ok 2026-03-09T20:18:58.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:58 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2053054613' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:18:59.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:58 vm09 ceph-mon[54524]: purged_snaps scrub starts 2026-03-09T20:18:59.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:58 vm09 ceph-mon[54524]: purged_snaps scrub ok 2026-03-09T20:18:59.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:58 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/2053054613' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:18:59.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:59 vm05 ceph-mon[51870]: osdmap e9: 2 total, 1 up, 2 in 2026-03-09T20:18:59.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:59 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:18:59.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:18:59 vm05 ceph-mon[51870]: pgmap v20: 0 pgs: ; 0 B data, 218 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:18:59.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:59 vm05 ceph-mon[61345]: osdmap e9: 2 total, 1 up, 2 in 2026-03-09T20:18:59.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:59 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:18:59.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:18:59 vm05 ceph-mon[61345]: pgmap v20: 0 pgs: ; 0 B data, 218 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:19:00.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:59 vm09 ceph-mon[54524]: osdmap e9: 2 total, 1 up, 2 in 2026-03-09T20:19:00.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:59 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:19:00.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:18:59 vm09 ceph-mon[54524]: pgmap v20: 0 pgs: ; 0 B data, 218 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:19:01.731 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:01 vm05 ceph-mon[61345]: pgmap v21: 0 pgs: ; 0 B data, 218 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:19:01.731 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:01 vm05 ceph-mon[51870]: pgmap v21: 0 pgs: ; 0 B data, 218 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:19:02.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:01 vm09 ceph-mon[54524]: pgmap v21: 0 pgs: ; 0 B data, 218 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:19:02.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:02 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T20:19:02.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:02 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:02.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:02 vm05 ceph-mon[61345]: Deploying daemon osd.1 on vm05 2026-03-09T20:19:02.849 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:02 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T20:19:02.849 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:02 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:02.849 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:02 vm05 ceph-mon[51870]: Deploying daemon osd.1 on vm05 2026-03-09T20:19:03.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:02 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 
2026-03-09T20:19:03.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:02 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:03.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:02 vm09 ceph-mon[54524]: Deploying daemon osd.1 on vm05 2026-03-09T20:19:03.644 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:03 vm05 ceph-mon[61345]: pgmap v22: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:19:03.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:03 vm05 ceph-mon[51870]: pgmap v22: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:19:04.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:03 vm09 ceph-mon[54524]: pgmap v22: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:19:04.736 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:04 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:04.736 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:04 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:04.736 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:04 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:04.736 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:04 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:04.736 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:04 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:04.736 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:04 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:05.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:04 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:05.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:04 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:05.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:04 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:05.729 INFO:teuthology.orchestra.run.vm05.stdout:Created osd(s) 1 on host 'vm05' 2026-03-09T20:19:05.729 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:05.729+0000 7f8f537fe640 1 -- 192.168.123.105:0/2969955850 <== mgr.14150 v1:192.168.123.105:6800/3290461294 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (unknown 0 0 2847272575) 0x7f8f34002bf0 con 0x7f8f440780d0 2026-03-09T20:19:05.732 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:05.733+0000 7f8f6f577640 1 -- 192.168.123.105:0/2969955850 >> v1:192.168.123.105:6800/3290461294 conn(0x7f8f440780d0 legacy=0x7f8f4407a590 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:19:05.732 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:05.733+0000 7f8f6f577640 1 -- 192.168.123.105:0/2969955850 >> v1:192.168.123.105:6790/0 conn(0x7f8f70077aa0 legacy=0x7f8f701a0140 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:19:05.732 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:05.733+0000 7f8f6f577640 1 -- 192.168.123.105:0/2969955850 shutdown_connections 2026-03-09T20:19:05.732 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:05.733+0000 7f8f6f577640 1 -- 192.168.123.105:0/2969955850 >> 192.168.123.105:0/2969955850 conn(0x7f8f700fff30 msgr2=0x7f8f70103430 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:19:05.733 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:05.733+0000 7f8f6f577640 1 -- 192.168.123.105:0/2969955850 shutdown_connections 2026-03-09T20:19:05.733 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:05.733+0000 7f8f6f577640 1 -- 192.168.123.105:0/2969955850 wait complete. 2026-03-09T20:19:05.899 DEBUG:teuthology.orchestra.run.vm05:osd.1> sudo journalctl -f -n 0 -u ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@osd.1.service 2026-03-09T20:19:05.900 INFO:tasks.cephadm:Deploying osd.2 on vm05 with /dev/vdc... 2026-03-09T20:19:05.900 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- lvm zap /dev/vdc 2026-03-09T20:19:06.222 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:19:06.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:06 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:06 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:06 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:06 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:06 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:06 vm05 ceph-mon[51870]: pgmap v23: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:19:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:06 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:06 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:06 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:06 vm05 ceph-mon[51870]: from='osd.1 v1:192.168.123.105:6805/3664200689' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T20:19:06.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:06 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:06.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:06 vm05 
ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:06.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:06 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:06.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:06 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:06.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:06 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:06.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:06 vm05 ceph-mon[61345]: pgmap v23: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:19:06.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:06 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:06.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:06 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:06.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:06 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:06.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:06 vm05 ceph-mon[61345]: from='osd.1 v1:192.168.123.105:6805/3664200689' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T20:19:06.410 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 20:19:06 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-1[70325]: 2026-03-09T20:19:06.149+0000 7f091d382740 -1 osd.1 0 log_to_monitors true 2026-03-09T20:19:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:06 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:06 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:06 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:06 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:06 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:06 vm09 ceph-mon[54524]: pgmap v23: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:19:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:06 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:06 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:06 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' 
entity='mgr.y' 2026-03-09T20:19:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:06 vm09 ceph-mon[54524]: from='osd.1 v1:192.168.123.105:6805/3664200689' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T20:19:07.773 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:19:07.791 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph orch daemon add osd vm05:/dev/vdc 2026-03-09T20:19:07.815 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:07 vm05 ceph-mon[61345]: from='osd.1 v1:192.168.123.105:6805/3664200689' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T20:19:07.815 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:07 vm05 ceph-mon[61345]: osdmap e10: 2 total, 1 up, 2 in 2026-03-09T20:19:07.815 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:07 vm05 ceph-mon[61345]: from='osd.1 v1:192.168.123.105:6805/3664200689' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-09T20:19:07.815 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:07 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:19:07.815 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:07 vm05 ceph-mon[61345]: Detected new or changed devices on vm05 2026-03-09T20:19:07.815 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:07 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:07.815 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:07 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:07.815 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:07 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:19:07.815 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:07 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:07.815 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:07 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:07.815 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:07 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:07.815 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:07 vm05 ceph-mon[61345]: pgmap v25: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:19:07.816 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 20:19:07 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-1[70325]: 2026-03-09T20:19:07.741+0000 7f0919b16640 -1 osd.1 0 waiting for initial osdmap 2026-03-09T20:19:07.816 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 20:19:07 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-1[70325]: 2026-03-09T20:19:07.750+0000 7f091512d640 -1 osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 
2026-03-09T20:19:07.816 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:07 vm05 ceph-mon[51870]: from='osd.1 v1:192.168.123.105:6805/3664200689' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T20:19:07.816 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:07 vm05 ceph-mon[51870]: osdmap e10: 2 total, 1 up, 2 in 2026-03-09T20:19:07.816 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:07 vm05 ceph-mon[51870]: from='osd.1 v1:192.168.123.105:6805/3664200689' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-09T20:19:07.816 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:07 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:19:07.816 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:07 vm05 ceph-mon[51870]: Detected new or changed devices on vm05 2026-03-09T20:19:07.816 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:07 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:07.816 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:07 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:07.816 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:07 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:19:07.816 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:07 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:07.816 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:07 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:07.816 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:07 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:07.816 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:07 vm05 ceph-mon[51870]: pgmap v25: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:19:07.970 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:19:08.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:07 vm09 ceph-mon[54524]: from='osd.1 v1:192.168.123.105:6805/3664200689' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T20:19:08.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:07 vm09 ceph-mon[54524]: osdmap e10: 2 total, 1 up, 2 in 2026-03-09T20:19:08.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:07 vm09 ceph-mon[54524]: from='osd.1 v1:192.168.123.105:6805/3664200689' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-09T20:19:08.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:07 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:19:08.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:07 vm09 ceph-mon[54524]: Detected new or 
changed devices on vm05 2026-03-09T20:19:08.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:07 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:08.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:07 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:08.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:07 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:19:08.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:07 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:08.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:07 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:08.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:07 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:08.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:07 vm09 ceph-mon[54524]: pgmap v25: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:19:08.138 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:08.138+0000 7f5e96ed2640 1 -- 192.168.123.105:0/1341233063 >> v1:192.168.123.105:6790/0 conn(0x7f5e901047a0 legacy=0x7f5e90104ba0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:19:08.139 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:08.139+0000 7f5e96ed2640 1 -- 192.168.123.105:0/1341233063 shutdown_connections 2026-03-09T20:19:08.139 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:08.139+0000 7f5e96ed2640 1 -- 192.168.123.105:0/1341233063 >> 192.168.123.105:0/1341233063 conn(0x7f5e900fff30 msgr2=0x7f5e90102370 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:19:08.139 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:08.139+0000 7f5e96ed2640 1 -- 192.168.123.105:0/1341233063 shutdown_connections 2026-03-09T20:19:08.139 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:08.139+0000 7f5e96ed2640 1 -- 192.168.123.105:0/1341233063 wait complete. 
2026-03-09T20:19:08.139 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:08.140+0000 7f5e96ed2640 1 Processor -- start 2026-03-09T20:19:08.139 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:08.140+0000 7f5e96ed2640 1 -- start start 2026-03-09T20:19:08.140 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:08.140+0000 7f5e96ed2640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f5e9019c860 con 0x7f5e901047a0 2026-03-09T20:19:08.140 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:08.140+0000 7f5e96ed2640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f5e901a8010 con 0x7f5e9010c8e0 2026-03-09T20:19:08.140 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:08.140+0000 7f5e96ed2640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f5e901a91f0 con 0x7f5e90108bd0 2026-03-09T20:19:08.140 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:08.140+0000 7f5e956cf640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7f5e90108bd0 0x7f5e901a21b0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.105:41528/0 (socket says 192.168.123.105:41528) 2026-03-09T20:19:08.140 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:08.140+0000 7f5e956cf640 1 -- 192.168.123.105:0/800295143 learned_addr learned my addr 192.168.123.105:0/800295143 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:19:08.140 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:08.141+0000 7f5e86ffd640 1 -- 192.168.123.105:0/800295143 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 732934936 0 0) 0x7f5e901a91f0 con 0x7f5e90108bd0 2026-03-09T20:19:08.140 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:08.141+0000 7f5e86ffd640 1 -- 192.168.123.105:0/800295143 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f5e60003620 con 0x7f5e90108bd0 2026-03-09T20:19:08.141 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:08.141+0000 7f5e86ffd640 1 -- 192.168.123.105:0/800295143 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 271846693 0 0) 0x7f5e9019c860 con 0x7f5e901047a0 2026-03-09T20:19:08.141 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:08.141+0000 7f5e86ffd640 1 -- 192.168.123.105:0/800295143 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f5e901a91f0 con 0x7f5e901047a0 2026-03-09T20:19:08.141 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:08.141+0000 7f5e86ffd640 1 -- 192.168.123.105:0/800295143 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1885371113 0 0) 0x7f5e901a8010 con 0x7f5e9010c8e0 2026-03-09T20:19:08.141 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:08.141+0000 7f5e86ffd640 1 -- 192.168.123.105:0/800295143 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f5e9019c860 con 0x7f5e9010c8e0 2026-03-09T20:19:08.141 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:08.141+0000 7f5e86ffd640 1 -- 192.168.123.105:0/800295143 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3679508908 0 0) 0x7f5e60003620 con 0x7f5e90108bd0 2026-03-09T20:19:08.141 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:08.142+0000 7f5e86ffd640 1 -- 192.168.123.105:0/800295143 --> 
v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f5e901a8010 con 0x7f5e90108bd0 2026-03-09T20:19:08.141 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:08.142+0000 7f5e86ffd640 1 -- 192.168.123.105:0/800295143 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2435037736 0 0) 0x7f5e9019c860 con 0x7f5e9010c8e0 2026-03-09T20:19:08.141 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:08.142+0000 7f5e86ffd640 1 -- 192.168.123.105:0/800295143 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f5e60003620 con 0x7f5e9010c8e0 2026-03-09T20:19:08.141 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:08.142+0000 7f5e86ffd640 1 -- 192.168.123.105:0/800295143 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3003953888 0 0) 0x7f5e901a91f0 con 0x7f5e901047a0 2026-03-09T20:19:08.141 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:08.142+0000 7f5e86ffd640 1 -- 192.168.123.105:0/800295143 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f5e9019c860 con 0x7f5e901047a0 2026-03-09T20:19:08.141 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:08.142+0000 7f5e86ffd640 1 -- 192.168.123.105:0/800295143 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f5e80003120 con 0x7f5e90108bd0 2026-03-09T20:19:08.141 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:08.142+0000 7f5e86ffd640 1 -- 192.168.123.105:0/800295143 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f5e8c003440 con 0x7f5e9010c8e0 2026-03-09T20:19:08.142 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:08.142+0000 7f5e86ffd640 1 -- 192.168.123.105:0/800295143 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f5e780042d0 con 0x7f5e901047a0 2026-03-09T20:19:08.142 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:08.142+0000 7f5e86ffd640 1 -- 192.168.123.105:0/800295143 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 832546501 0 0) 0x7f5e9019c860 con 0x7f5e901047a0 2026-03-09T20:19:08.142 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:08.142+0000 7f5e86ffd640 1 -- 192.168.123.105:0/800295143 >> v1:192.168.123.105:6790/0 conn(0x7f5e90108bd0 legacy=0x7f5e901a21b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:19:08.142 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:08.142+0000 7f5e86ffd640 1 -- 192.168.123.105:0/800295143 >> v1:192.168.123.109:6789/0 conn(0x7f5e9010c8e0 legacy=0x7f5e901a58e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:19:08.142 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:08.142+0000 7f5e86ffd640 1 -- 192.168.123.105:0/800295143 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f5e901aa3d0 con 0x7f5e901047a0 2026-03-09T20:19:08.142 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:08.142+0000 7f5e96ed2640 1 -- 192.168.123.105:0/800295143 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f5e901a8240 con 0x7f5e901047a0 2026-03-09T20:19:08.143 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:08.142+0000 7f5e96ed2640 1 -- 192.168.123.105:0/800295143 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f5e901a8770 con 0x7f5e901047a0 
2026-03-09T20:19:08.143 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:08.142+0000 7f5e86ffd640 1 -- 192.168.123.105:0/800295143 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f5e780033c0 con 0x7f5e901047a0 2026-03-09T20:19:08.143 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:08.142+0000 7f5e86ffd640 1 -- 192.168.123.105:0/800295143 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f5e78004940 con 0x7f5e901047a0 2026-03-09T20:19:08.143 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:08.143+0000 7f5e86ffd640 1 -- 192.168.123.105:0/800295143 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 14) ==== 99944+0+0 (unknown 1147942814 0 0) 0x7f5e78005c10 con 0x7f5e901047a0 2026-03-09T20:19:08.143 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:08.144+0000 7f5e86ffd640 1 -- 192.168.123.105:0/800295143 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(11..11 src has 1..11) ==== 1683+0+0 (unknown 3326147173 0 0) 0x7f5e78092790 con 0x7f5e901047a0 2026-03-09T20:19:08.143 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:08.144+0000 7f5e96ed2640 1 -- 192.168.123.105:0/800295143 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f5e58005180 con 0x7f5e901047a0 2026-03-09T20:19:08.146 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:08.147+0000 7f5e86ffd640 1 -- 192.168.123.105:0/800295143 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f5e7805c9d0 con 0x7f5e901047a0 2026-03-09T20:19:08.248 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:08.247+0000 7f5e96ed2640 1 -- 192.168.123.105:0/800295143 --> v1:192.168.123.105:6800/3290461294 -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdc", "target": ["mon-mgr", ""]}) -- 0x7f5e58002bf0 con 0x7f5e600786a0 2026-03-09T20:19:08.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:08 vm05 ceph-mon[61345]: from='osd.1 v1:192.168.123.105:6805/3664200689' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-09T20:19:08.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:08 vm05 ceph-mon[61345]: osdmap e11: 2 total, 1 up, 2 in 2026-03-09T20:19:08.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:08 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:19:08.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:08 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:19:08.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:08 vm05 ceph-mon[61345]: from='client.14268 v1:192.168.123.105:0/800295143' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:08.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:08 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:19:08.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:08 vm05 ceph-mon[61345]: from='mgr.14150 
v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:19:08.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:08 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:08.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:08 vm05 ceph-mon[51870]: from='osd.1 v1:192.168.123.105:6805/3664200689' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-09T20:19:08.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:08 vm05 ceph-mon[51870]: osdmap e11: 2 total, 1 up, 2 in 2026-03-09T20:19:08.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:08 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:19:08.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:08 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:19:08.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:08 vm05 ceph-mon[51870]: from='client.14268 v1:192.168.123.105:0/800295143' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:08.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:08 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:19:08.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:08 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:19:08.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:08 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:09.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:08 vm09 ceph-mon[54524]: from='osd.1 v1:192.168.123.105:6805/3664200689' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-09T20:19:09.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:08 vm09 ceph-mon[54524]: osdmap e11: 2 total, 1 up, 2 in 2026-03-09T20:19:09.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:08 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:19:09.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:08 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:19:09.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:08 vm09 ceph-mon[54524]: from='client.14268 v1:192.168.123.105:0/800295143' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:09.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:08 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": 
"json"}]: dispatch 2026-03-09T20:19:09.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:08 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:19:09.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:08 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:09.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:09 vm05 ceph-mon[51870]: purged_snaps scrub starts 2026-03-09T20:19:09.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:09 vm05 ceph-mon[51870]: purged_snaps scrub ok 2026-03-09T20:19:09.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:09 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:19:09.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:09 vm05 ceph-mon[51870]: osd.1 v1:192.168.123.105:6805/3664200689 boot 2026-03-09T20:19:09.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:09 vm05 ceph-mon[51870]: osdmap e12: 2 total, 2 up, 2 in 2026-03-09T20:19:09.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:09 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:19:09.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:09 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3205005026' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "58868a45-388a-4244-bde9-e525f4e2b7d5"}]: dispatch 2026-03-09T20:19:09.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:09 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3205005026' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "58868a45-388a-4244-bde9-e525f4e2b7d5"}]': finished 2026-03-09T20:19:09.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:09 vm05 ceph-mon[51870]: osdmap e13: 3 total, 2 up, 3 in 2026-03-09T20:19:09.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:09 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:19:09.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:09 vm05 ceph-mon[51870]: pgmap v29: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:19:09.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:09 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/2336037580' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:19:09.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:09 vm05 ceph-mon[61345]: purged_snaps scrub starts 2026-03-09T20:19:09.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:09 vm05 ceph-mon[61345]: purged_snaps scrub ok 2026-03-09T20:19:09.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:09 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:19:09.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:09 vm05 ceph-mon[61345]: osd.1 v1:192.168.123.105:6805/3664200689 boot 2026-03-09T20:19:09.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:09 vm05 ceph-mon[61345]: osdmap e12: 2 total, 2 up, 2 in 2026-03-09T20:19:09.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:09 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:19:09.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:09 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3205005026' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "58868a45-388a-4244-bde9-e525f4e2b7d5"}]: dispatch 2026-03-09T20:19:09.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:09 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3205005026' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "58868a45-388a-4244-bde9-e525f4e2b7d5"}]': finished 2026-03-09T20:19:09.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:09 vm05 ceph-mon[61345]: osdmap e13: 3 total, 2 up, 3 in 2026-03-09T20:19:09.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:09 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:19:09.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:09 vm05 ceph-mon[61345]: pgmap v29: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:19:09.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:09 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2336037580' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:19:10.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:09 vm09 ceph-mon[54524]: purged_snaps scrub starts 2026-03-09T20:19:10.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:09 vm09 ceph-mon[54524]: purged_snaps scrub ok 2026-03-09T20:19:10.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:09 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:19:10.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:09 vm09 ceph-mon[54524]: osd.1 v1:192.168.123.105:6805/3664200689 boot 2026-03-09T20:19:10.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:09 vm09 ceph-mon[54524]: osdmap e12: 2 total, 2 up, 2 in 2026-03-09T20:19:10.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:09 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:19:10.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:09 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3205005026' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "58868a45-388a-4244-bde9-e525f4e2b7d5"}]: dispatch 2026-03-09T20:19:10.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:09 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3205005026' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "58868a45-388a-4244-bde9-e525f4e2b7d5"}]': finished 2026-03-09T20:19:10.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:09 vm09 ceph-mon[54524]: osdmap e13: 3 total, 2 up, 3 in 2026-03-09T20:19:10.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:09 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:19:10.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:09 vm09 ceph-mon[54524]: pgmap v29: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:19:10.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:09 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2336037580' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:19:11.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:11 vm09 ceph-mon[54524]: osdmap e14: 3 total, 2 up, 3 in 2026-03-09T20:19:11.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:11 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:19:11.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:11 vm05 ceph-mon[51870]: osdmap e14: 3 total, 2 up, 3 in 2026-03-09T20:19:11.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:11 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:19:11.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:11 vm05 ceph-mon[61345]: osdmap e14: 3 total, 2 up, 3 in 2026-03-09T20:19:11.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:11 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:19:12.382 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:12 vm05 ceph-mon[51870]: pgmap v31: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:19:12.382 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:12 vm05 ceph-mon[61345]: pgmap v31: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:19:12.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:12 vm09 ceph-mon[54524]: pgmap v31: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:19:13.620 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:13 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T20:19:13.620 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:13 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:13.621 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:13 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T20:19:13.621 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:13 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": 
"config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:13.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:13 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T20:19:13.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:13 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:14.724 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:14 vm05 ceph-mon[61345]: Deploying daemon osd.2 on vm05 2026-03-09T20:19:14.724 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:14 vm05 ceph-mon[61345]: pgmap v32: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:19:14.725 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:14 vm05 ceph-mon[51870]: Deploying daemon osd.2 on vm05 2026-03-09T20:19:14.725 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:14 vm05 ceph-mon[51870]: pgmap v32: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:19:14.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:14 vm09 ceph-mon[54524]: Deploying daemon osd.2 on vm05 2026-03-09T20:19:14.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:14 vm09 ceph-mon[54524]: pgmap v32: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:19:16.594 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:16 vm05 ceph-mon[51870]: pgmap v33: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:19:16.594 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:16 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:16.594 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:16 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:16.594 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:16 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:16.594 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:16 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:16.594 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:16 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:16.594 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:16 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:16.594 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:16 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:16.595 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:16 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:16.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:16 vm05 ceph-mon[61345]: pgmap v33: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:19:16.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:16 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:16.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:16 vm05 ceph-mon[61345]: 
from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:16.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:16 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:16.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:16 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:16.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:16 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:16.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:16 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:16.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:16 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:16.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:16 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:17.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:16 vm09 ceph-mon[54524]: pgmap v33: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:19:17.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:16 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:17.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:16 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:17.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:16 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:17.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:16 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:17.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:16 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:17.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:16 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:17.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:16 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:17.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:16 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:17.081 INFO:teuthology.orchestra.run.vm05.stdout:Created osd(s) 2 on host 'vm05' 2026-03-09T20:19:17.081 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:17.080+0000 7f5e86ffd640 1 -- 192.168.123.105:0/800295143 <== mgr.14150 v1:192.168.123.105:6800/3290461294 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (unknown 0 0 1234733726) 0x7f5e58002bf0 con 0x7f5e600786a0 2026-03-09T20:19:17.083 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:17.083+0000 7f5e96ed2640 1 -- 192.168.123.105:0/800295143 >> v1:192.168.123.105:6800/3290461294 conn(0x7f5e600786a0 legacy=0x7f5e6007ab60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:19:17.083 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:17.083+0000 7f5e96ed2640 1 -- 192.168.123.105:0/800295143 >> v1:192.168.123.105:6789/0 conn(0x7f5e901047a0 legacy=0x7f5e9019bce0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:19:17.083 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:17.083+0000 7f5e96ed2640 1 -- 192.168.123.105:0/800295143 shutdown_connections 2026-03-09T20:19:17.083 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:17.083+0000 7f5e96ed2640 1 -- 192.168.123.105:0/800295143 >> 192.168.123.105:0/800295143 conn(0x7f5e900fff30 msgr2=0x7f5e90109010 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:19:17.083 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:17.083+0000 7f5e96ed2640 1 -- 192.168.123.105:0/800295143 shutdown_connections 2026-03-09T20:19:17.083 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:17.083+0000 7f5e96ed2640 1 -- 192.168.123.105:0/800295143 wait complete. 2026-03-09T20:19:17.230 DEBUG:teuthology.orchestra.run.vm05:osd.2> sudo journalctl -f -n 0 -u ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@osd.2.service 2026-03-09T20:19:17.273 INFO:tasks.cephadm:Deploying osd.3 on vm05 with /dev/vdb... 2026-03-09T20:19:17.273 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- lvm zap /dev/vdb 2026-03-09T20:19:17.532 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 20:19:17 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-2[75948]: 2026-03-09T20:19:17.434+0000 7f25c05a5740 -1 osd.2 0 log_to_monitors true 2026-03-09T20:19:17.579 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:19:17.808 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:17 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:17.808 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:17 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:17.808 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:17 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:17.808 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:17 vm05 ceph-mon[51870]: from='osd.2 v1:192.168.123.105:6809/1060255430' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T20:19:17.808 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:17 vm05 ceph-mon[51870]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T20:19:17.808 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:17 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:17.808 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:17 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:17.808 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:17 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:17.808 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:17 vm05 ceph-mon[61345]: from='osd.2 
v1:192.168.123.105:6809/1060255430' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T20:19:17.808 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:17 vm05 ceph-mon[61345]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T20:19:18.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:17 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:18.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:17 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:18.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:17 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:18.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:17 vm09 ceph-mon[54524]: from='osd.2 v1:192.168.123.105:6809/1060255430' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T20:19:18.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:17 vm09 ceph-mon[54524]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T20:19:18.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:18 vm05 ceph-mon[51870]: pgmap v34: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:19:18.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:18 vm05 ceph-mon[51870]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T20:19:18.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:18 vm05 ceph-mon[51870]: osdmap e15: 3 total, 2 up, 3 in 2026-03-09T20:19:18.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:18 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:19:18.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:18 vm05 ceph-mon[51870]: from='osd.2 v1:192.168.123.105:6809/1060255430' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-09T20:19:18.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:18 vm05 ceph-mon[51870]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-09T20:19:18.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:18 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:18.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:18 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:18.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:18 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:19:18.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:18 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:18.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:18 vm05 
ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:18.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:18 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:18.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:18 vm05 ceph-mon[61345]: pgmap v34: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:19:18.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:18 vm05 ceph-mon[61345]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T20:19:18.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:18 vm05 ceph-mon[61345]: osdmap e15: 3 total, 2 up, 3 in 2026-03-09T20:19:18.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:18 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:19:18.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:18 vm05 ceph-mon[61345]: from='osd.2 v1:192.168.123.105:6809/1060255430' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-09T20:19:18.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:18 vm05 ceph-mon[61345]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-09T20:19:18.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:18 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:18.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:18 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:18.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:18 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:19:18.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:18 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:18.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:18 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:18.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:18 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:19.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:18 vm09 ceph-mon[54524]: pgmap v34: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:19:19.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:18 vm09 ceph-mon[54524]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T20:19:19.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:18 vm09 ceph-mon[54524]: osdmap e15: 3 total, 2 up, 3 in 2026-03-09T20:19:19.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:18 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:19:19.022 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:18 vm09 ceph-mon[54524]: from='osd.2 v1:192.168.123.105:6809/1060255430' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-09T20:19:19.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:18 vm09 ceph-mon[54524]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-09T20:19:19.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:18 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:19.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:18 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:19.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:18 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:19:19.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:18 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:19.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:18 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:19.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:18 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:19.097 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:19:19.117 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph orch daemon add osd vm05:/dev/vdb 2026-03-09T20:19:19.159 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 20:19:19 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-2[75948]: 2026-03-09T20:19:19.093+0000 7f25bcd39640 -1 osd.2 0 waiting for initial osdmap 2026-03-09T20:19:19.159 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 20:19:19 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-2[75948]: 2026-03-09T20:19:19.107+0000 7f25b8350640 -1 osd.2 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T20:19:19.290 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:19:19.424 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:19.424+0000 7fd9e3fff640 1 -- 192.168.123.105:0/3666574860 >> v1:192.168.123.109:6789/0 conn(0x7fd9e4108bd0 legacy=0x7fd9e410b020 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:19:19.424 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:19.425+0000 7fd9e3fff640 1 -- 192.168.123.105:0/3666574860 shutdown_connections 2026-03-09T20:19:19.424 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:19.425+0000 7fd9e3fff640 1 -- 192.168.123.105:0/3666574860 >> 192.168.123.105:0/3666574860 conn(0x7fd9e40fff30 msgr2=0x7fd9e4102370 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:19:19.424 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:19.425+0000 7fd9e3fff640 1 -- 
192.168.123.105:0/3666574860 shutdown_connections 2026-03-09T20:19:19.425 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:19.425+0000 7fd9e3fff640 1 -- 192.168.123.105:0/3666574860 wait complete. 2026-03-09T20:19:19.425 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:19.426+0000 7fd9e3fff640 1 Processor -- start 2026-03-09T20:19:19.425 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:19.426+0000 7fd9e3fff640 1 -- start start 2026-03-09T20:19:19.426 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:19.426+0000 7fd9e3fff640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fd9e419c700 con 0x7fd9e41047a0 2026-03-09T20:19:19.426 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:19.427+0000 7fd9e37fe640 1 --1- >> v1:192.168.123.109:6789/0 conn(0x7fd9e410c8e0 0x7fd9e41a5790 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.109:6789/0 says I am v1:192.168.123.105:40494/0 (socket says 192.168.123.105:40494) 2026-03-09T20:19:19.426 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:19.427+0000 7fd9e37fe640 1 -- 192.168.123.105:0/280998892 learned_addr learned my addr 192.168.123.105:0/280998892 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:19:19.426 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:19.427+0000 7fd9e3fff640 1 -- 192.168.123.105:0/280998892 --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fd9e41a7ec0 con 0x7fd9e410c8e0 2026-03-09T20:19:19.427 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:19.427+0000 7fd9e3fff640 1 -- 192.168.123.105:0/280998892 --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fd9e41a90a0 con 0x7fd9e4108bd0 2026-03-09T20:19:19.427 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:19.428+0000 7fd9c3fff640 1 -- 192.168.123.105:0/280998892 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1046712530 0 0) 0x7fd9e41a7ec0 con 0x7fd9e410c8e0 2026-03-09T20:19:19.427 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:19.428+0000 7fd9c3fff640 1 -- 192.168.123.105:0/280998892 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fd9b8003620 con 0x7fd9e410c8e0 2026-03-09T20:19:19.427 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:19.428+0000 7fd9c3fff640 1 -- 192.168.123.105:0/280998892 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2629729355 0 0) 0x7fd9e419c700 con 0x7fd9e41047a0 2026-03-09T20:19:19.427 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:19.428+0000 7fd9c3fff640 1 -- 192.168.123.105:0/280998892 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fd9e41a7ec0 con 0x7fd9e41047a0 2026-03-09T20:19:19.427 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:19.428+0000 7fd9c3fff640 1 -- 192.168.123.105:0/280998892 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2752589707 0 0) 0x7fd9e41a90a0 con 0x7fd9e4108bd0 2026-03-09T20:19:19.428 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:19.428+0000 7fd9c3fff640 1 -- 192.168.123.105:0/280998892 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fd9e419c700 con 0x7fd9e4108bd0 2026-03-09T20:19:19.428 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:19.428+0000 7fd9c3fff640 1 -- 192.168.123.105:0/280998892 <== mon.1 v1:192.168.123.109:6789/0 2 ==== 
auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 541354498 0 0) 0x7fd9b8003620 con 0x7fd9e410c8e0 2026-03-09T20:19:19.428 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:19.428+0000 7fd9c3fff640 1 -- 192.168.123.105:0/280998892 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fd9e41a90a0 con 0x7fd9e410c8e0 2026-03-09T20:19:19.428 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:19.428+0000 7fd9c3fff640 1 -- 192.168.123.105:0/280998892 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7fd9d40034e0 con 0x7fd9e410c8e0 2026-03-09T20:19:19.428 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:19.429+0000 7fd9c3fff640 1 -- 192.168.123.105:0/280998892 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1294724958 0 0) 0x7fd9e41a7ec0 con 0x7fd9e41047a0 2026-03-09T20:19:19.428 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:19.429+0000 7fd9c3fff640 1 -- 192.168.123.105:0/280998892 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fd9b8003620 con 0x7fd9e41047a0 2026-03-09T20:19:19.428 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:19.429+0000 7fd9c3fff640 1 -- 192.168.123.105:0/280998892 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7fd9cc002f70 con 0x7fd9e41047a0 2026-03-09T20:19:19.429 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:19.429+0000 7fd9c3fff640 1 -- 192.168.123.105:0/280998892 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2689885866 0 0) 0x7fd9b8003620 con 0x7fd9e41047a0 2026-03-09T20:19:19.429 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:19.429+0000 7fd9c3fff640 1 -- 192.168.123.105:0/280998892 >> v1:192.168.123.105:6790/0 conn(0x7fd9e4108bd0 legacy=0x7fd9e41a2060 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:19:19.429 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:19.429+0000 7fd9c3fff640 1 -- 192.168.123.105:0/280998892 >> v1:192.168.123.109:6789/0 conn(0x7fd9e410c8e0 legacy=0x7fd9e41a5790 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:19:19.429 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:19.429+0000 7fd9c3fff640 1 -- 192.168.123.105:0/280998892 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fd9e41aa280 con 0x7fd9e41047a0 2026-03-09T20:19:19.430 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:19.430+0000 7fd9c3fff640 1 -- 192.168.123.105:0/280998892 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7fd9cc0044d0 con 0x7fd9e41047a0 2026-03-09T20:19:19.430 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:19.430+0000 7fd9c3fff640 1 -- 192.168.123.105:0/280998892 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7fd9cc004900 con 0x7fd9e41047a0 2026-03-09T20:19:19.430 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:19.430+0000 7fd9e3fff640 1 -- 192.168.123.105:0/280998892 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7fd9e41a6f70 con 0x7fd9e41047a0 2026-03-09T20:19:19.430 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:19.430+0000 7fd9e3fff640 1 -- 192.168.123.105:0/280998892 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7fd9e41a7550 con 0x7fd9e41047a0 2026-03-09T20:19:19.434 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:19.431+0000 7fd9c3fff640 1 -- 192.168.123.105:0/280998892 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 14) ==== 99944+0+0 (unknown 1147942814 0 0) 0x7fd9cc01cf50 con 0x7fd9e41047a0 2026-03-09T20:19:19.434 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:19.431+0000 7fd9e3fff640 1 -- 192.168.123.105:0/280998892 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fd9a8005180 con 0x7fd9e41047a0 2026-03-09T20:19:19.434 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:19.435+0000 7fd9c3fff640 1 -- 192.168.123.105:0/280998892 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(16..16 src has 1..16) ==== 1975+0+0 (unknown 812681438 0 0) 0x7fd9cc092940 con 0x7fd9e41047a0 2026-03-09T20:19:19.434 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:19.435+0000 7fd9c3fff640 1 -- 192.168.123.105:0/280998892 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7fd9cc05ca60 con 0x7fd9e41047a0 2026-03-09T20:19:19.537 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:19.538+0000 7fd9e3fff640 1 -- 192.168.123.105:0/280998892 --> v1:192.168.123.105:6800/3290461294 -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdb", "target": ["mon-mgr", ""]}) -- 0x7fd9a8002bf0 con 0x7fd9b80809d0 2026-03-09T20:19:19.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:19 vm05 ceph-mon[51870]: Detected new or changed devices on vm05 2026-03-09T20:19:19.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:19 vm05 ceph-mon[51870]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-09T20:19:19.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:19 vm05 ceph-mon[51870]: osdmap e16: 3 total, 2 up, 3 in 2026-03-09T20:19:19.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:19 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:19:19.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:19 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:19:19.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:19 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:19:19.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:19 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:19:19.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:19 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:19.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:19 vm05 ceph-mon[61345]: Detected new or changed devices on vm05 2026-03-09T20:19:19.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:19 vm05 ceph-mon[61345]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-09T20:19:19.660 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:19 vm05 ceph-mon[61345]: osdmap e16: 3 total, 2 up, 3 in 2026-03-09T20:19:19.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:19 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:19:19.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:19 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:19:19.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:19 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:19:19.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:19 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:19:19.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:19 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:20.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:19 vm09 ceph-mon[54524]: Detected new or changed devices on vm05 2026-03-09T20:19:20.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:19 vm09 ceph-mon[54524]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-09T20:19:20.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:19 vm09 ceph-mon[54524]: osdmap e16: 3 total, 2 up, 3 in 2026-03-09T20:19:20.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:19 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:19:20.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:19 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:19:20.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:19 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:19:20.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:19 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:19:20.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:19 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:20.778 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:20 vm05 ceph-mon[51870]: purged_snaps scrub starts 2026-03-09T20:19:20.778 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:20 vm05 ceph-mon[51870]: purged_snaps scrub ok 2026-03-09T20:19:20.778 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:20 vm05 ceph-mon[51870]: pgmap v37: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:19:20.778 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:20 vm05 ceph-mon[51870]: from='client.14295 v1:192.168.123.105:0/280998892' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", 
"svc_arg": "vm05:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:20.778 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:20 vm05 ceph-mon[51870]: osd.2 v1:192.168.123.105:6809/1060255430 boot 2026-03-09T20:19:20.778 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:20 vm05 ceph-mon[51870]: osdmap e17: 3 total, 3 up, 3 in 2026-03-09T20:19:20.778 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:20 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:19:20.778 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:20 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/46783893' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "4c40929b-9b22-486e-aed2-a111cbaa96da"}]: dispatch 2026-03-09T20:19:20.778 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:20 vm05 ceph-mon[51870]: from='client.24193 ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "4c40929b-9b22-486e-aed2-a111cbaa96da"}]: dispatch 2026-03-09T20:19:20.778 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:20 vm05 ceph-mon[51870]: from='client.24193 ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4c40929b-9b22-486e-aed2-a111cbaa96da"}]': finished 2026-03-09T20:19:20.778 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:20 vm05 ceph-mon[51870]: osdmap e18: 4 total, 3 up, 4 in 2026-03-09T20:19:20.778 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:20 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:19:20.778 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:20 vm05 ceph-mon[61345]: purged_snaps scrub starts 2026-03-09T20:19:20.778 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:20 vm05 ceph-mon[61345]: purged_snaps scrub ok 2026-03-09T20:19:20.778 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:20 vm05 ceph-mon[61345]: pgmap v37: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:19:20.778 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:20 vm05 ceph-mon[61345]: from='client.14295 v1:192.168.123.105:0/280998892' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:20.779 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:20 vm05 ceph-mon[61345]: osd.2 v1:192.168.123.105:6809/1060255430 boot 2026-03-09T20:19:20.779 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:20 vm05 ceph-mon[61345]: osdmap e17: 3 total, 3 up, 3 in 2026-03-09T20:19:20.779 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:20 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:19:20.779 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:20 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/46783893' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "4c40929b-9b22-486e-aed2-a111cbaa96da"}]: dispatch 2026-03-09T20:19:20.779 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:20 vm05 ceph-mon[61345]: from='client.24193 ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "4c40929b-9b22-486e-aed2-a111cbaa96da"}]: dispatch 2026-03-09T20:19:20.779 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:20 vm05 ceph-mon[61345]: from='client.24193 ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4c40929b-9b22-486e-aed2-a111cbaa96da"}]': finished 2026-03-09T20:19:20.779 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:20 vm05 ceph-mon[61345]: osdmap e18: 4 total, 3 up, 4 in 2026-03-09T20:19:20.779 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:20 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:19:21.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:20 vm09 ceph-mon[54524]: purged_snaps scrub starts 2026-03-09T20:19:21.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:20 vm09 ceph-mon[54524]: purged_snaps scrub ok 2026-03-09T20:19:21.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:20 vm09 ceph-mon[54524]: pgmap v37: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:19:21.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:20 vm09 ceph-mon[54524]: from='client.14295 v1:192.168.123.105:0/280998892' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:21.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:20 vm09 ceph-mon[54524]: osd.2 v1:192.168.123.105:6809/1060255430 boot 2026-03-09T20:19:21.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:20 vm09 ceph-mon[54524]: osdmap e17: 3 total, 3 up, 3 in 2026-03-09T20:19:21.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:20 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:19:21.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:20 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/46783893' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "4c40929b-9b22-486e-aed2-a111cbaa96da"}]: dispatch 2026-03-09T20:19:21.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:20 vm09 ceph-mon[54524]: from='client.24193 ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "4c40929b-9b22-486e-aed2-a111cbaa96da"}]: dispatch 2026-03-09T20:19:21.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:20 vm09 ceph-mon[54524]: from='client.24193 ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4c40929b-9b22-486e-aed2-a111cbaa96da"}]': finished 2026-03-09T20:19:21.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:20 vm09 ceph-mon[54524]: osdmap e18: 4 total, 3 up, 4 in 2026-03-09T20:19:21.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:20 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:19:22.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:21 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1033198841' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:19:22.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:21 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:19:22.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:21 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1033198841' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:19:22.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:21 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:19:22.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:21 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1033198841' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:19:22.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:21 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:19:23.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:22 vm05 ceph-mon[61345]: pgmap v40: 0 pgs: ; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:19:23.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:22 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-09T20:19:23.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:22 vm05 ceph-mon[61345]: osdmap e19: 4 total, 3 up, 4 in 2026-03-09T20:19:23.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:22 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:19:23.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:22 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:19:23.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:23 vm05 sudo[80720]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda 2026-03-09T20:19:23.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:23 vm05 sudo[80720]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-09T20:19:23.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:23 vm05 sudo[80720]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-09T20:19:23.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:23 vm05 sudo[80720]: pam_unix(sudo:session): session closed for user root 2026-03-09T20:19:23.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:22 vm05 ceph-mon[51870]: pgmap v40: 0 pgs: ; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:19:23.160 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:22 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-09T20:19:23.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:22 vm05 ceph-mon[51870]: osdmap e19: 4 total, 3 up, 4 in 2026-03-09T20:19:23.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:22 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:19:23.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:22 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:19:23.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:22 vm05 sudo[80716]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda 2026-03-09T20:19:23.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:22 vm05 sudo[80716]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-09T20:19:23.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:22 vm05 sudo[80716]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-09T20:19:23.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:22 vm05 sudo[80716]: pam_unix(sudo:session): session closed for user root 2026-03-09T20:19:23.160 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 20:19:22 vm05 sudo[80708]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vdd 2026-03-09T20:19:23.160 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 20:19:22 vm05 sudo[80708]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-09T20:19:23.160 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 20:19:22 vm05 sudo[80708]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-09T20:19:23.160 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 20:19:22 vm05 sudo[80708]: pam_unix(sudo:session): session closed for user root 2026-03-09T20:19:23.160 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 20:19:22 vm05 sudo[80712]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vdc 2026-03-09T20:19:23.160 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 20:19:22 vm05 sudo[80712]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-09T20:19:23.160 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 20:19:22 vm05 sudo[80712]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-09T20:19:23.160 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 20:19:22 vm05 sudo[80712]: pam_unix(sudo:session): session closed for user root 2026-03-09T20:19:23.160 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 20:19:22 vm05 sudo[80704]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vde 2026-03-09T20:19:23.160 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 20:19:22 vm05 sudo[80704]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-09T20:19:23.160 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 20:19:22 vm05 sudo[80704]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-09T20:19:23.160 
INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 20:19:22 vm05 sudo[80704]: pam_unix(sudo:session): session closed for user root 2026-03-09T20:19:23.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:22 vm09 ceph-mon[54524]: pgmap v40: 0 pgs: ; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:19:23.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:22 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-09T20:19:23.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:22 vm09 ceph-mon[54524]: osdmap e19: 4 total, 3 up, 4 in 2026-03-09T20:19:23.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:22 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:19:23.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:22 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:19:23.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:23 vm09 sudo[56516]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda 2026-03-09T20:19:23.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:23 vm09 sudo[56516]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-09T20:19:23.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:23 vm09 sudo[56516]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-09T20:19:23.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:23 vm09 sudo[56516]: pam_unix(sudo:session): session closed for user root 2026-03-09T20:19:23.898 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:23.899+0000 7fd9c3fff640 1 -- 192.168.123.105:0/280998892 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 4267040124 0 0) 0x7fd9cc0579f0 con 0x7fd9e41047a0 2026-03-09T20:19:23.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:23 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-09T20:19:23.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:23 vm05 ceph-mon[51870]: osdmap e20: 4 total, 3 up, 4 in 2026-03-09T20:19:23.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:23 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:19:23.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:23 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:19:23.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:23 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:23.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:23 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:19:23.909 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:23 vm05 ceph-mon[51870]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T20:19:23.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:23 vm05 ceph-mon[51870]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T20:19:23.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:23 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:19:23.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:23 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:23.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:23 vm05 ceph-mon[51870]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T20:19:23.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:23 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:19:23.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:23 vm05 ceph-mon[51870]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T20:19:23.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:23 vm05 ceph-mon[51870]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T20:19:23.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:23 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:19:23.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:23 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:23.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:23 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:19:23.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:23 vm05 ceph-mon[51870]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T20:19:23.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:23 vm05 ceph-mon[51870]: pgmap v43: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:19:23.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:23 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-09T20:19:23.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:23 vm05 ceph-mon[61345]: osdmap e20: 4 total, 3 up, 4 in 2026-03-09T20:19:23.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:23 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:19:23.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:23 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:19:23.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:23 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' 
entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:23.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:23 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:19:23.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:23 vm05 ceph-mon[61345]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T20:19:23.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:23 vm05 ceph-mon[61345]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T20:19:23.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:23 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:19:23.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:23 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:23.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:23 vm05 ceph-mon[61345]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T20:19:23.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:23 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:19:23.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:23 vm05 ceph-mon[61345]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T20:19:23.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:23 vm05 ceph-mon[61345]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T20:19:23.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:23 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:19:23.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:23 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:23.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:23 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:19:23.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:23 vm05 ceph-mon[61345]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T20:19:23.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:23 vm05 ceph-mon[61345]: pgmap v43: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:19:24.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:23 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-09T20:19:24.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:23 vm09 ceph-mon[54524]: osdmap e20: 4 total, 3 up, 4 in 2026-03-09T20:19:24.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:23 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:19:24.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 
09 20:19:23 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:19:24.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:23 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:24.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:23 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:19:24.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:23 vm09 ceph-mon[54524]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T20:19:24.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:23 vm09 ceph-mon[54524]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T20:19:24.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:23 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:19:24.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:23 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:24.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:23 vm09 ceph-mon[54524]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T20:19:24.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:23 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:19:24.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:23 vm09 ceph-mon[54524]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T20:19:24.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:23 vm09 ceph-mon[54524]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T20:19:24.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:23 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:19:24.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:23 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:24.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:23 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:19:24.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:23 vm09 ceph-mon[54524]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T20:19:24.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:23 vm09 ceph-mon[54524]: pgmap v43: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:19:25.145 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:24 vm05 ceph-mon[61345]: mgrmap e15: y(active, since 66s), standbys: x 2026-03-09T20:19:25.146 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:24 vm05 ceph-mon[61345]: osdmap e21: 4 total, 3 up, 4 in 2026-03-09T20:19:25.146 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:24 vm05 ceph-mon[61345]: from='mgr.14150 
v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:19:25.146 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:24 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T20:19:25.146 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:24 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:25.146 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:24 vm05 ceph-mon[51870]: mgrmap e15: y(active, since 66s), standbys: x 2026-03-09T20:19:25.146 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:24 vm05 ceph-mon[51870]: osdmap e21: 4 total, 3 up, 4 in 2026-03-09T20:19:25.146 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:24 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:19:25.146 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:24 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T20:19:25.146 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:24 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:25.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:24 vm09 ceph-mon[54524]: mgrmap e15: y(active, since 66s), standbys: x 2026-03-09T20:19:25.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:24 vm09 ceph-mon[54524]: osdmap e21: 4 total, 3 up, 4 in 2026-03-09T20:19:25.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:24 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:19:25.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:24 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T20:19:25.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:24 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:26.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:25 vm05 ceph-mon[51870]: Deploying daemon osd.3 on vm05 2026-03-09T20:19:26.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:25 vm05 ceph-mon[51870]: pgmap v45: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:19:26.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:25 vm05 ceph-mon[61345]: Deploying daemon osd.3 on vm05 2026-03-09T20:19:26.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:25 vm05 ceph-mon[61345]: pgmap v45: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:19:26.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:25 vm09 ceph-mon[54524]: Deploying daemon osd.3 on vm05 2026-03-09T20:19:26.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:25 vm09 ceph-mon[54524]: pgmap v45: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:19:27.390 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:27 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 
cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:27.390 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:27 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:27.390 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:27 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:27.390 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:27 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:27.390 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:27 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:27.390 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:27 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:27.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:27 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:27.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:27 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:27.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:27 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:28.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:28 vm05 ceph-mon[51870]: pgmap v46: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:19:28.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:28 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:28.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:28 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:28.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:28 vm05 ceph-mon[61345]: pgmap v46: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:19:28.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:28 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:28.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:28 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:28.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:28 vm09 ceph-mon[54524]: pgmap v46: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:19:28.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:28 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:28.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:28 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:28.557 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:28.557+0000 7fd9c3fff640 1 -- 192.168.123.105:0/280998892 <== mgr.14150 v1:192.168.123.105:6800/3290461294 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (unknown 0 0 377247425) 0x7fd9a8002bf0 con 0x7fd9b80809d0 2026-03-09T20:19:28.560 INFO:teuthology.orchestra.run.vm05.stdout:Created osd(s) 3 on host 'vm05' 2026-03-09T20:19:28.561 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:28.561+0000 7fd9e3fff640 1 -- 
192.168.123.105:0/280998892 >> v1:192.168.123.105:6800/3290461294 conn(0x7fd9b80809d0 legacy=0x7fd9b8082e90 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:19:28.561 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:28.561+0000 7fd9e3fff640 1 -- 192.168.123.105:0/280998892 >> v1:192.168.123.105:6789/0 conn(0x7fd9e41047a0 legacy=0x7fd9e419bb80 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:19:28.563 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:28.564+0000 7fd9e3fff640 1 -- 192.168.123.105:0/280998892 shutdown_connections 2026-03-09T20:19:28.563 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:28.564+0000 7fd9e3fff640 1 -- 192.168.123.105:0/280998892 >> 192.168.123.105:0/280998892 conn(0x7fd9e40fff30 msgr2=0x7fd9e410b020 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:19:28.563 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:28.564+0000 7fd9e3fff640 1 -- 192.168.123.105:0/280998892 shutdown_connections 2026-03-09T20:19:28.564 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:19:28.564+0000 7fd9e3fff640 1 -- 192.168.123.105:0/280998892 wait complete. 2026-03-09T20:19:28.740 DEBUG:teuthology.orchestra.run.vm05:osd.3> sudo journalctl -f -n 0 -u ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@osd.3.service 2026-03-09T20:19:28.782 INFO:tasks.cephadm:Deploying osd.4 on vm09 with /dev/vde... 2026-03-09T20:19:28.782 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- lvm zap /dev/vde 2026-03-09T20:19:28.964 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.b/config 2026-03-09T20:19:29.312 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:29 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:29.313 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:29 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:29.313 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:29 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:29.313 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:29 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:29.313 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:29 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:29.313 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:29 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:29.313 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:29 vm09 ceph-mon[54524]: from='osd.3 v1:192.168.123.105:6813/4176641888' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T20:19:29.313 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:29 vm09 ceph-mon[54524]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T20:19:29.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 
20:19:29 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:29.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:29 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:29.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:29 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:29.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:29 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:29.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:29 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:29.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:29 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:29.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:29 vm05 ceph-mon[51870]: from='osd.3 v1:192.168.123.105:6813/4176641888' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T20:19:29.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:29 vm05 ceph-mon[51870]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T20:19:29.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:29 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:29.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:29 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:29.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:29 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:29.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:29 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:29.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:29 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:29.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:29 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:29.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:29 vm05 ceph-mon[61345]: from='osd.3 v1:192.168.123.105:6813/4176641888' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T20:19:29.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:29 vm05 ceph-mon[61345]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T20:19:30.086 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:19:30.102 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph 
orch daemon add osd vm09:/dev/vde 2026-03-09T20:19:30.285 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.b/config 2026-03-09T20:19:30.378 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:30 vm09 ceph-mon[54524]: pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:19:30.378 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:30 vm09 ceph-mon[54524]: from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T20:19:30.378 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:30 vm09 ceph-mon[54524]: osdmap e22: 4 total, 3 up, 4 in 2026-03-09T20:19:30.379 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:30 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:19:30.379 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:30 vm09 ceph-mon[54524]: from='osd.3 v1:192.168.123.105:6813/4176641888' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-09T20:19:30.379 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:30 vm09 ceph-mon[54524]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-09T20:19:30.379 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:30 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:30.379 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:30 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:30.379 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:30 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:19:30.379 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:30 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:30.379 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:30 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:30.379 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:30 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:30.433 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:30.431+0000 7ff2f3270640 1 -- 192.168.123.109:0/4144399927 >> v1:192.168.123.105:6789/0 conn(0x7ff2ec102730 legacy=0x7ff2ec102b30 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:19:30.433 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:30.432+0000 7ff2f3270640 1 -- 192.168.123.109:0/4144399927 shutdown_connections 2026-03-09T20:19:30.433 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:30.432+0000 7ff2f3270640 1 -- 192.168.123.109:0/4144399927 >> 192.168.123.109:0/4144399927 conn(0x7ff2ec0fdec0 msgr2=0x7ff2ec100300 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:19:30.433 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:30.432+0000 7ff2f3270640 1 -- 192.168.123.109:0/4144399927 shutdown_connections 2026-03-09T20:19:30.433 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:30.433+0000 7ff2f3270640 1 -- 192.168.123.109:0/4144399927 wait complete. 2026-03-09T20:19:30.433 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:30.433+0000 7ff2f3270640 1 Processor -- start 2026-03-09T20:19:30.434 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:30.433+0000 7ff2f3270640 1 -- start start 2026-03-09T20:19:30.434 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:30.434+0000 7ff2f3270640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7ff2ec19a7f0 con 0x7ff2ec106b60 2026-03-09T20:19:30.434 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:30.434+0000 7ff2f3270640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7ff2ec1a5fb0 con 0x7ff2ec10a870 2026-03-09T20:19:30.434 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:30.434+0000 7ff2f3270640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7ff2ec1a7190 con 0x7ff2ec102730 2026-03-09T20:19:30.434 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:30.434+0000 7ff2e3fff640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7ff2ec106b60 0x7ff2ec1a0150 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.109:41308/0 (socket says 192.168.123.109:41308) 2026-03-09T20:19:30.434 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:30.434+0000 7ff2e3fff640 1 -- 192.168.123.109:0/2316368902 learned_addr learned my addr 192.168.123.109:0/2316368902 (peer_addr_for_me v1:192.168.123.109:0/0) 2026-03-09T20:19:30.435 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:30.434+0000 7ff2e1ffb640 1 -- 192.168.123.109:0/2316368902 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 770771963 0 0) 0x7ff2ec1a5fb0 con 0x7ff2ec10a870 2026-03-09T20:19:30.435 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:30.434+0000 7ff2e1ffb640 1 -- 192.168.123.109:0/2316368902 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7ff2bc003620 con 0x7ff2ec10a870 2026-03-09T20:19:30.435 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:30.435+0000 7ff2e1ffb640 1 -- 192.168.123.109:0/2316368902 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3791862679 0 0) 0x7ff2ec1a7190 con 0x7ff2ec102730 2026-03-09T20:19:30.435 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:30.435+0000 7ff2e1ffb640 1 -- 192.168.123.109:0/2316368902 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7ff2ec1a5fb0 con 0x7ff2ec102730 2026-03-09T20:19:30.435 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:30.435+0000 7ff2e1ffb640 1 -- 192.168.123.109:0/2316368902 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3571528595 0 0) 0x7ff2ec19a7f0 con 0x7ff2ec106b60 2026-03-09T20:19:30.435 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:30.435+0000 7ff2e1ffb640 1 -- 192.168.123.109:0/2316368902 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7ff2ec1a7190 con 0x7ff2ec106b60 2026-03-09T20:19:30.435 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:30.435+0000 7ff2e1ffb640 1 -- 192.168.123.109:0/2316368902 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3389719086 0 0) 0x7ff2bc003620 con 0x7ff2ec10a870 
2026-03-09T20:19:30.435 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:30.435+0000 7ff2e1ffb640 1 -- 192.168.123.109:0/2316368902 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7ff2ec19a7f0 con 0x7ff2ec10a870 2026-03-09T20:19:30.435 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:30.435+0000 7ff2e1ffb640 1 -- 192.168.123.109:0/2316368902 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7ff2e8003350 con 0x7ff2ec10a870 2026-03-09T20:19:30.435 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:30.435+0000 7ff2e1ffb640 1 -- 192.168.123.109:0/2316368902 <== mon.1 v1:192.168.123.109:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2912538597 0 0) 0x7ff2ec19a7f0 con 0x7ff2ec10a870 2026-03-09T20:19:30.435 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:30.435+0000 7ff2e1ffb640 1 -- 192.168.123.109:0/2316368902 >> v1:192.168.123.105:6790/0 conn(0x7ff2ec102730 legacy=0x7ff2ec199c70 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:19:30.435 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:30.435+0000 7ff2e1ffb640 1 -- 192.168.123.109:0/2316368902 >> v1:192.168.123.105:6789/0 conn(0x7ff2ec106b60 legacy=0x7ff2ec1a0150 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:19:30.435 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:30.435+0000 7ff2e1ffb640 1 -- 192.168.123.109:0/2316368902 --> v1:192.168.123.109:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7ff2ec1a8370 con 0x7ff2ec10a870 2026-03-09T20:19:30.436 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:30.435+0000 7ff2f3270640 1 -- 192.168.123.109:0/2316368902 --> v1:192.168.123.109:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7ff2ec1a61e0 con 0x7ff2ec10a870 2026-03-09T20:19:30.437 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:30.435+0000 7ff2f3270640 1 -- 192.168.123.109:0/2316368902 --> v1:192.168.123.109:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7ff2ec1a6770 con 0x7ff2ec10a870 2026-03-09T20:19:30.437 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:30.436+0000 7ff2f3270640 1 -- 192.168.123.109:0/2316368902 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7ff2b8005180 con 0x7ff2ec10a870 2026-03-09T20:19:30.437 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:30.437+0000 7ff2e1ffb640 1 -- 192.168.123.109:0/2316368902 <== mon.1 v1:192.168.123.109:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7ff2e80048d0 con 0x7ff2ec10a870 2026-03-09T20:19:30.437 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:30.437+0000 7ff2e1ffb640 1 -- 192.168.123.109:0/2316368902 <== mon.1 v1:192.168.123.109:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7ff2e8004ce0 con 0x7ff2ec10a870 2026-03-09T20:19:30.441 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:30.437+0000 7ff2e1ffb640 1 -- 192.168.123.109:0/2316368902 <== mon.1 v1:192.168.123.109:6789/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 4267040124 0 0) 0x7ff2e801d630 con 0x7ff2ec10a870 2026-03-09T20:19:30.441 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:30.438+0000 7ff2e1ffb640 1 -- 192.168.123.109:0/2316368902 <== mon.1 v1:192.168.123.109:6789/0 8 ==== osd_map(22..22 src has 1..22) ==== 2637+0+0 (unknown 1177444572 0 0) 0x7ff2e80932f0 con 0x7ff2ec10a870 2026-03-09T20:19:30.441 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:30.441+0000 
7ff2e1ffb640 1 -- 192.168.123.109:0/2316368902 <== mon.1 v1:192.168.123.109:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7ff2e805d0a0 con 0x7ff2ec10a870 2026-03-09T20:19:30.549 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:30.548+0000 7ff2f3270640 1 -- 192.168.123.109:0/2316368902 --> v1:192.168.123.105:6800/3290461294 -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vde", "target": ["mon-mgr", ""]}) -- 0x7ff2b8002bf0 con 0x7ff2bc0809a0 2026-03-09T20:19:30.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:30 vm05 ceph-mon[61345]: pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:19:30.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:30 vm05 ceph-mon[61345]: from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T20:19:30.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:30 vm05 ceph-mon[61345]: osdmap e22: 4 total, 3 up, 4 in 2026-03-09T20:19:30.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:30 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:19:30.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:30 vm05 ceph-mon[61345]: from='osd.3 v1:192.168.123.105:6813/4176641888' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-09T20:19:30.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:30 vm05 ceph-mon[61345]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-09T20:19:30.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:30 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:30.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:30 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:30.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:30 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:19:30.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:30 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:30.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:30 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:30.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:30 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:30.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:30 vm05 ceph-mon[51870]: pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:19:30.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:30 vm05 ceph-mon[51870]: from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T20:19:30.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:30 vm05 
ceph-mon[51870]: osdmap e22: 4 total, 3 up, 4 in 2026-03-09T20:19:30.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:30 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:19:30.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:30 vm05 ceph-mon[51870]: from='osd.3 v1:192.168.123.105:6813/4176641888' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-09T20:19:30.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:30 vm05 ceph-mon[51870]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-09T20:19:30.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:30 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:30.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:30 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:30.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:30 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:19:30.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:30 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:30.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:30 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:30.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:30 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:31.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:31 vm09 ceph-mon[54524]: Detected new or changed devices on vm05 2026-03-09T20:19:31.540 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:31 vm09 ceph-mon[54524]: from='client.24220 v1:192.168.123.109:0/2316368902' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:31.540 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:31 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:19:31.540 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:31 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:19:31.540 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:31 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:31.540 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:31 vm09 ceph-mon[54524]: from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-09T20:19:31.540 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:31 vm09 ceph-mon[54524]: osdmap e23: 4 total, 3 up, 4 in 2026-03-09T20:19:31.540 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:31 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:19:31.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:31 vm05 ceph-mon[61345]: Detected new or changed devices on vm05 2026-03-09T20:19:31.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:31 vm05 ceph-mon[61345]: from='client.24220 v1:192.168.123.109:0/2316368902' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:31.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:31 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:19:31.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:31 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:19:31.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:31 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:31.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:31 vm05 ceph-mon[61345]: from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-09T20:19:31.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:31 vm05 ceph-mon[61345]: osdmap e23: 4 total, 3 up, 4 in 2026-03-09T20:19:31.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:31 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:19:31.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:31 vm05 ceph-mon[51870]: Detected new or changed devices on vm05 2026-03-09T20:19:31.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:31 vm05 ceph-mon[51870]: from='client.24220 v1:192.168.123.109:0/2316368902' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:31.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:31 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:19:31.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:31 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:19:31.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:31 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:31.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:31 vm05 ceph-mon[51870]: from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-09T20:19:31.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:31 vm05 ceph-mon[51870]: osdmap e23: 4 total, 3 up, 4 in 2026-03-09T20:19:31.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:31 vm05 ceph-mon[51870]: 
from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:19:32.506 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:32 vm09 ceph-mon[54524]: purged_snaps scrub starts 2026-03-09T20:19:32.506 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:32 vm09 ceph-mon[54524]: purged_snaps scrub ok 2026-03-09T20:19:32.506 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:32 vm09 ceph-mon[54524]: pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:19:32.506 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:32 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:19:32.506 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:32 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.109:0/2906042949' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "acddd4eb-0110-4992-a3c7-201ba9fd8f8e"}]: dispatch 2026-03-09T20:19:32.506 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:32 vm09 ceph-mon[54524]: from='client.24226 ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "acddd4eb-0110-4992-a3c7-201ba9fd8f8e"}]: dispatch 2026-03-09T20:19:32.506 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:32 vm09 ceph-mon[54524]: from='client.24226 ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "acddd4eb-0110-4992-a3c7-201ba9fd8f8e"}]': finished 2026-03-09T20:19:32.506 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:32 vm09 ceph-mon[54524]: osdmap e24: 5 total, 3 up, 5 in 2026-03-09T20:19:32.506 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:32 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:19:32.506 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:32 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:19:32.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:32 vm05 ceph-mon[61345]: purged_snaps scrub starts 2026-03-09T20:19:32.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:32 vm05 ceph-mon[61345]: purged_snaps scrub ok 2026-03-09T20:19:32.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:32 vm05 ceph-mon[61345]: pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:19:32.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:32 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:19:32.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:32 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.109:0/2906042949' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "acddd4eb-0110-4992-a3c7-201ba9fd8f8e"}]: dispatch 2026-03-09T20:19:32.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:32 vm05 ceph-mon[61345]: from='client.24226 ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "acddd4eb-0110-4992-a3c7-201ba9fd8f8e"}]: dispatch 2026-03-09T20:19:32.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:32 vm05 ceph-mon[61345]: from='client.24226 ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "acddd4eb-0110-4992-a3c7-201ba9fd8f8e"}]': finished 2026-03-09T20:19:32.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:32 vm05 ceph-mon[61345]: osdmap e24: 5 total, 3 up, 5 in 2026-03-09T20:19:32.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:32 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:19:32.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:32 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:19:32.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:32 vm05 ceph-mon[51870]: purged_snaps scrub starts 2026-03-09T20:19:32.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:32 vm05 ceph-mon[51870]: purged_snaps scrub ok 2026-03-09T20:19:32.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:32 vm05 ceph-mon[51870]: pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:19:32.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:32 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:19:32.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:32 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.109:0/2906042949' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "acddd4eb-0110-4992-a3c7-201ba9fd8f8e"}]: dispatch 2026-03-09T20:19:32.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:32 vm05 ceph-mon[51870]: from='client.24226 ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "acddd4eb-0110-4992-a3c7-201ba9fd8f8e"}]: dispatch 2026-03-09T20:19:32.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:32 vm05 ceph-mon[51870]: from='client.24226 ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "acddd4eb-0110-4992-a3c7-201ba9fd8f8e"}]': finished 2026-03-09T20:19:32.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:32 vm05 ceph-mon[51870]: osdmap e24: 5 total, 3 up, 5 in 2026-03-09T20:19:32.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:32 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:19:32.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:32 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:19:32.660 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 09 20:19:32 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-3[81622]: 2026-03-09T20:19:32.395+0000 7f15fed15640 -1 osd.3 0 waiting for initial osdmap 2026-03-09T20:19:32.660 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 09 20:19:32 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-3[81622]: 2026-03-09T20:19:32.406+0000 7f15fa33e640 -1 osd.3 24 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T20:19:33.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:33 vm09 ceph-mon[54524]: from='osd.3 ' entity='osd.3' 2026-03-09T20:19:33.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:33 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:19:33.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:33 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.109:0/17660147' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:19:33.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:33 vm05 ceph-mon[61345]: from='osd.3 ' entity='osd.3' 2026-03-09T20:19:33.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:33 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:19:33.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:33 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.109:0/17660147' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:19:33.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:33 vm05 ceph-mon[51870]: from='osd.3 ' entity='osd.3' 2026-03-09T20:19:33.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:33 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:19:33.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:33 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.109:0/17660147' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:19:34.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:34 vm09 ceph-mon[54524]: osd.3 v1:192.168.123.105:6813/4176641888 boot 2026-03-09T20:19:34.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:34 vm09 ceph-mon[54524]: osdmap e25: 5 total, 4 up, 5 in 2026-03-09T20:19:34.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:34 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:19:34.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:34 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:19:34.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:34 vm09 ceph-mon[54524]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:19:34.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:34 vm05 ceph-mon[61345]: osd.3 v1:192.168.123.105:6813/4176641888 boot 2026-03-09T20:19:34.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:34 vm05 ceph-mon[61345]: osdmap e25: 5 total, 4 up, 5 in 2026-03-09T20:19:34.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:34 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:19:34.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:34 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:19:34.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:34 vm05 ceph-mon[61345]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:19:34.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:34 vm05 ceph-mon[51870]: osd.3 v1:192.168.123.105:6813/4176641888 boot 2026-03-09T20:19:34.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:34 vm05 ceph-mon[51870]: osdmap e25: 5 total, 4 up, 5 in 2026-03-09T20:19:34.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:34 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:19:34.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:34 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:19:34.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:34 vm05 ceph-mon[51870]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:19:35.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:35 vm05 ceph-mon[61345]: osdmap e26: 5 total, 4 up, 5 in 2026-03-09T20:19:35.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:35 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:19:35.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:35 vm05 ceph-mon[51870]: osdmap e26: 5 total, 4 up, 5 in 2026-03-09T20:19:35.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:35 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:19:35.981 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:35 vm09 ceph-mon[54524]: osdmap e26: 5 total, 4 up, 5 in 2026-03-09T20:19:35.981 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:35 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:19:36.836 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:36 vm09 ceph-mon[54524]: pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 507 MiB used, 79 GiB / 80 GiB avail 2026-03-09T20:19:36.836 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:36 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T20:19:36.836 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:36 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:37.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:36 vm05 ceph-mon[61345]: pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 507 MiB used, 79 GiB / 80 GiB avail 2026-03-09T20:19:37.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:36 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T20:19:37.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:36 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:37.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:36 vm05 ceph-mon[51870]: pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 507 MiB used, 79 GiB / 80 GiB avail 2026-03-09T20:19:37.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:36 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T20:19:37.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:36 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:37.721 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:37 vm09 ceph-mon[54524]: Deploying daemon osd.4 on vm09 2026-03-09T20:19:37.721 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:37 vm09 ceph-mon[54524]: pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 507 MiB used, 79 GiB / 80 GiB avail 2026-03-09T20:19:38.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:37 vm05 ceph-mon[61345]: Deploying daemon osd.4 on vm09 2026-03-09T20:19:38.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:37 vm05 ceph-mon[61345]: pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 507 MiB used, 79 GiB / 80 GiB avail 2026-03-09T20:19:38.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:37 vm05 ceph-mon[51870]: Deploying daemon osd.4 on vm09 2026-03-09T20:19:38.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:37 vm05 ceph-mon[51870]: pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 507 MiB used, 79 GiB / 80 GiB avail 2026-03-09T20:19:39.594 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:39 vm09 ceph-mon[54524]: pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 507 MiB used, 79 GiB / 80 GiB avail 2026-03-09T20:19:39.594 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:39 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", 
"format": "json"}]: dispatch 2026-03-09T20:19:39.849 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:39 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:39.849 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:39 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:39.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:39 vm05 ceph-mon[61345]: pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 507 MiB used, 79 GiB / 80 GiB avail 2026-03-09T20:19:39.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:39 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:39.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:39 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:39.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:39 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:39.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:39 vm05 ceph-mon[51870]: pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 507 MiB used, 79 GiB / 80 GiB avail 2026-03-09T20:19:39.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:39 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:39.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:39 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:39.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:39 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:41.108 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:41.107+0000 7ff2e1ffb640 1 -- 192.168.123.109:0/2316368902 <== mgr.14150 v1:192.168.123.105:6800/3290461294 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (unknown 0 0 715239282) 0x7ff2b8002bf0 con 0x7ff2bc0809a0 2026-03-09T20:19:41.108 INFO:teuthology.orchestra.run.vm09.stdout:Created osd(s) 4 on host 'vm09' 2026-03-09T20:19:41.111 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:41.110+0000 7ff2f3270640 1 -- 192.168.123.109:0/2316368902 >> v1:192.168.123.105:6800/3290461294 conn(0x7ff2bc0809a0 legacy=0x7ff2bc082e60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:19:41.111 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:41.110+0000 7ff2f3270640 1 -- 192.168.123.109:0/2316368902 >> v1:192.168.123.109:6789/0 conn(0x7ff2ec10a870 legacy=0x7ff2ec1a3880 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:19:41.111 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:41.110+0000 7ff2f3270640 1 -- 192.168.123.109:0/2316368902 shutdown_connections 2026-03-09T20:19:41.111 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:41.110+0000 7ff2f3270640 1 -- 192.168.123.109:0/2316368902 >> 192.168.123.109:0/2316368902 conn(0x7ff2ec0fdec0 msgr2=0x7ff2ec1002b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:19:41.111 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:41.110+0000 7ff2f3270640 1 -- 192.168.123.109:0/2316368902 shutdown_connections 2026-03-09T20:19:41.111 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:41.110+0000 7ff2f3270640 1 -- 192.168.123.109:0/2316368902 wait complete. 
2026-03-09T20:19:41.265 DEBUG:teuthology.orchestra.run.vm09:osd.4> sudo journalctl -f -n 0 -u ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@osd.4.service 2026-03-09T20:19:41.266 INFO:tasks.cephadm:Deploying osd.5 on vm09 with /dev/vdd... 2026-03-09T20:19:41.267 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- lvm zap /dev/vdd 2026-03-09T20:19:41.343 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:41 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:41.343 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:41 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:41.343 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:41 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:41.343 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:41 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:41.343 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:41 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:41.343 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:41 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:41.343 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:41 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:41.343 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:41 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:41.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:41 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:41.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:41 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:41.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:41 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:41.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:41 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:41.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:41 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:41.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:41 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:41.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:41 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:41.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:41 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:41.660 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:41 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:41.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:41 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:41.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:41 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:41.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:41 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:41.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:41 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:41.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:41 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:41.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:41 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:41.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:41 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:41.704 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.b/config 2026-03-09T20:19:41.958 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 20:19:41 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-4[58888]: 2026-03-09T20:19:41.719+0000 7fb84daee740 -1 osd.4 0 log_to_monitors true 2026-03-09T20:19:42.500 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:42 vm09 ceph-mon[54524]: pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 507 MiB used, 79 GiB / 80 GiB avail 2026-03-09T20:19:42.500 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:42 vm09 ceph-mon[54524]: from='osd.4 v1:192.168.123.109:6800/4063967321' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T20:19:42.500 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:42 vm09 ceph-mon[54524]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T20:19:42.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:42 vm05 ceph-mon[51870]: pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 507 MiB used, 79 GiB / 80 GiB avail 2026-03-09T20:19:42.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:42 vm05 ceph-mon[51870]: from='osd.4 v1:192.168.123.109:6800/4063967321' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T20:19:42.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:42 vm05 ceph-mon[51870]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T20:19:42.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:42 vm05 ceph-mon[61345]: pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 507 MiB used, 79 GiB / 80 GiB avail 2026-03-09T20:19:42.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:42 vm05 ceph-mon[61345]: from='osd.4 v1:192.168.123.109:6800/4063967321' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": 
"hdd", "ids": ["4"]}]: dispatch 2026-03-09T20:19:42.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:42 vm05 ceph-mon[61345]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T20:19:43.234 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:19:43.253 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph orch daemon add osd vm09:/dev/vdd 2026-03-09T20:19:43.481 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.b/config 2026-03-09T20:19:43.571 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:43 vm09 ceph-mon[54524]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T20:19:43.571 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:43 vm09 ceph-mon[54524]: from='osd.4 v1:192.168.123.109:6800/4063967321' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T20:19:43.571 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:43 vm09 ceph-mon[54524]: osdmap e27: 5 total, 4 up, 5 in 2026-03-09T20:19:43.571 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:43 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:19:43.571 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:43 vm09 ceph-mon[54524]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T20:19:43.572 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:43 vm09 ceph-mon[54524]: Detected new or changed devices on vm09 2026-03-09T20:19:43.572 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:43 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:43.572 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:43 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:43.572 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:43 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:19:43.572 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:43 vm09 ceph-mon[54524]: Adjusting osd_memory_target on vm09 to 257.0M 2026-03-09T20:19:43.572 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:43 vm09 ceph-mon[54524]: Unable to set osd_memory_target on vm09 to 269536460: error parsing value: Value '269536460' is below minimum 939524096 2026-03-09T20:19:43.572 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:43 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:43.572 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:43 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:43.572 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:43 vm09 ceph-mon[54524]: from='mgr.14150 
v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:43.642 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:43.641+0000 7ff8e8669640 1 -- 192.168.123.109:0/3509519894 >> v1:192.168.123.109:6789/0 conn(0x7ff8e0102700 legacy=0x7ff8e0102b00 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:19:43.642 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:43.642+0000 7ff8e8669640 1 -- 192.168.123.109:0/3509519894 shutdown_connections 2026-03-09T20:19:43.642 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:43.642+0000 7ff8e8669640 1 -- 192.168.123.109:0/3509519894 >> 192.168.123.109:0/3509519894 conn(0x7ff8e00fde70 msgr2=0x7ff8e01002d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:19:43.642 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:43.642+0000 7ff8e8669640 1 -- 192.168.123.109:0/3509519894 shutdown_connections 2026-03-09T20:19:43.642 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:43.642+0000 7ff8e8669640 1 -- 192.168.123.109:0/3509519894 wait complete. 2026-03-09T20:19:43.643 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:43.643+0000 7ff8e8669640 1 Processor -- start 2026-03-09T20:19:43.643 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:43.643+0000 7ff8e8669640 1 -- start start 2026-03-09T20:19:43.643 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:43.643+0000 7ff8e8669640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7ff8e019a700 con 0x7ff8e010a840 2026-03-09T20:19:43.643 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:43.643+0000 7ff8e8669640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7ff8e01a5ec0 con 0x7ff8e0106b30 2026-03-09T20:19:43.643 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:43.643+0000 7ff8e8669640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7ff8e01a70a0 con 0x7ff8e0102700 2026-03-09T20:19:43.643 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:43.643+0000 7ff8e6bdf640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7ff8e010a840 0x7ff8e01a3790 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.109:43364/0 (socket says 192.168.123.109:43364) 2026-03-09T20:19:43.644 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:43.643+0000 7ff8e6bdf640 1 -- 192.168.123.109:0/1522543036 learned_addr learned my addr 192.168.123.109:0/1522543036 (peer_addr_for_me v1:192.168.123.109:0/0) 2026-03-09T20:19:43.644 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:43.644+0000 7ff8c77fe640 1 -- 192.168.123.109:0/1522543036 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2439086490 0 0) 0x7ff8e01a5ec0 con 0x7ff8e0106b30 2026-03-09T20:19:43.644 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:43.644+0000 7ff8c77fe640 1 -- 192.168.123.109:0/1522543036 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7ff8b8003620 con 0x7ff8e0106b30 2026-03-09T20:19:43.644 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:43.644+0000 7ff8c77fe640 1 -- 192.168.123.109:0/1522543036 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 796429153 0 0) 0x7ff8e01a70a0 con 0x7ff8e0102700 2026-03-09T20:19:43.644 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:43.644+0000 7ff8c77fe640 1 -- 192.168.123.109:0/1522543036 --> 
v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7ff8e01a5ec0 con 0x7ff8e0102700 2026-03-09T20:19:43.644 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:43.644+0000 7ff8c77fe640 1 -- 192.168.123.109:0/1522543036 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1588928072 0 0) 0x7ff8e01a5ec0 con 0x7ff8e0102700 2026-03-09T20:19:43.644 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:43.644+0000 7ff8c77fe640 1 -- 192.168.123.109:0/1522543036 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7ff8e01a70a0 con 0x7ff8e0102700 2026-03-09T20:19:43.644 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:43.644+0000 7ff8c77fe640 1 -- 192.168.123.109:0/1522543036 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3954542373 0 0) 0x7ff8e019a700 con 0x7ff8e010a840 2026-03-09T20:19:43.644 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:43.644+0000 7ff8c77fe640 1 -- 192.168.123.109:0/1522543036 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7ff8e01a5ec0 con 0x7ff8e010a840 2026-03-09T20:19:43.644 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:43.644+0000 7ff8c77fe640 1 -- 192.168.123.109:0/1522543036 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7ff8d00026e0 con 0x7ff8e0102700 2026-03-09T20:19:43.645 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:43.644+0000 7ff8c77fe640 1 -- 192.168.123.109:0/1522543036 <== mon.2 v1:192.168.123.105:6790/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 59598229 0 0) 0x7ff8e01a70a0 con 0x7ff8e0102700 2026-03-09T20:19:43.645 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:43.644+0000 7ff8c77fe640 1 -- 192.168.123.109:0/1522543036 >> v1:192.168.123.109:6789/0 conn(0x7ff8e0106b30 legacy=0x7ff8e01a0060 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:19:43.645 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:43.644+0000 7ff8c77fe640 1 -- 192.168.123.109:0/1522543036 >> v1:192.168.123.105:6789/0 conn(0x7ff8e010a840 legacy=0x7ff8e01a3790 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:19:43.645 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:43.644+0000 7ff8c77fe640 1 -- 192.168.123.109:0/1522543036 --> v1:192.168.123.105:6790/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7ff8e01a8280 con 0x7ff8e0102700 2026-03-09T20:19:43.645 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:43.644+0000 7ff8e8669640 1 -- 192.168.123.109:0/1522543036 --> v1:192.168.123.105:6790/0 -- mon_subscribe({mgrmap=0+}) -- 0x7ff8e01a4f10 con 0x7ff8e0102700 2026-03-09T20:19:43.646 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:43.645+0000 7ff8c77fe640 1 -- 192.168.123.109:0/1522543036 <== mon.2 v1:192.168.123.105:6790/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7ff8d00032c0 con 0x7ff8e0102700 2026-03-09T20:19:43.646 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:43.645+0000 7ff8c77fe640 1 -- 192.168.123.109:0/1522543036 <== mon.2 v1:192.168.123.105:6790/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7ff8d0004d90 con 0x7ff8e0102700 2026-03-09T20:19:43.646 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:43.645+0000 7ff8e8669640 1 -- 192.168.123.109:0/1522543036 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=0}) -- 0x7ff8e01a54f0 con 0x7ff8e0102700 
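Note: the pair of messages "Adjusting osd_memory_target on vm09 to 257.0M" and "Unable to set osd_memory_target on vm09 to 269536460 ... below minimum 939524096" (repeated by each mon) appears to come from cephadm's memory autotuning: the per-OSD share of RAM it computes on these small VPS hosts is roughly 257 MiB, which is under the option's reported floor of 939524096 bytes (896 MiB), so the mon rejects the setting. A quick check using only the numbers printed in the log:

    # Values taken verbatim from the log lines above.
    proposed = 269_536_460      # bytes cephadm tried to set for osd.4
    minimum  = 939_524_096      # lower bound the mon reports

    print(proposed / 2**20)     # ~257.05 MiB, reported as "257.0M" above
    print(minimum / 2**20)      # 896.0 MiB
    assert proposed < minimum   # hence "Unable to set osd_memory_target"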
2026-03-09T20:19:43.647 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:43.646+0000 7ff8c77fe640 1 -- 192.168.123.109:0/1522543036 <== mon.2 v1:192.168.123.105:6790/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 4267040124 0 0) 0x7ff8d0002e70 con 0x7ff8e0102700 2026-03-09T20:19:43.647 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:43.646+0000 7ff8e8669640 1 -- 192.168.123.109:0/1522543036 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7ff8a8005180 con 0x7ff8e0102700 2026-03-09T20:19:43.647 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:43.647+0000 7ff8c77fe640 1 -- 192.168.123.109:0/1522543036 <== mon.2 v1:192.168.123.105:6790/0 8 ==== osd_map(28..28 src has 1..28) ==== 3045+0+0 (unknown 4281602760 0 0) 0x7ff8d0093220 con 0x7ff8e0102700 2026-03-09T20:19:43.651 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:43.650+0000 7ff8c77fe640 1 -- 192.168.123.109:0/1522543036 <== mon.2 v1:192.168.123.105:6790/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7ff8d005ce30 con 0x7ff8e0102700 2026-03-09T20:19:43.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:43 vm05 ceph-mon[61345]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T20:19:43.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:43 vm05 ceph-mon[61345]: from='osd.4 v1:192.168.123.109:6800/4063967321' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T20:19:43.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:43 vm05 ceph-mon[61345]: osdmap e27: 5 total, 4 up, 5 in 2026-03-09T20:19:43.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:43 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:19:43.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:43 vm05 ceph-mon[61345]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T20:19:43.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:43 vm05 ceph-mon[61345]: Detected new or changed devices on vm09 2026-03-09T20:19:43.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:43 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:43.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:43 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:43.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:43 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:19:43.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:43 vm05 ceph-mon[61345]: Adjusting osd_memory_target on vm09 to 257.0M 2026-03-09T20:19:43.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:43 vm05 ceph-mon[61345]: Unable to set osd_memory_target on vm09 to 269536460: error parsing value: Value '269536460' is below minimum 939524096 2026-03-09T20:19:43.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:43 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-09T20:19:43.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:43 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:43.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:43 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:43.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:43 vm05 ceph-mon[51870]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T20:19:43.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:43 vm05 ceph-mon[51870]: from='osd.4 v1:192.168.123.109:6800/4063967321' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T20:19:43.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:43 vm05 ceph-mon[51870]: osdmap e27: 5 total, 4 up, 5 in 2026-03-09T20:19:43.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:43 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:19:43.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:43 vm05 ceph-mon[51870]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T20:19:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:43 vm05 ceph-mon[51870]: Detected new or changed devices on vm09 2026-03-09T20:19:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:43 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:43 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:43 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:19:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:43 vm05 ceph-mon[51870]: Adjusting osd_memory_target on vm09 to 257.0M 2026-03-09T20:19:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:43 vm05 ceph-mon[51870]: Unable to set osd_memory_target on vm09 to 269536460: error parsing value: Value '269536460' is below minimum 939524096 2026-03-09T20:19:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:43 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:43 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:43 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:43.755 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:43.755+0000 7ff8e8669640 1 -- 192.168.123.109:0/1522543036 --> v1:192.168.123.105:6800/3290461294 -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdd", "target": ["mon-mgr", ""]}) -- 0x7ff8a8002bf0 
con 0x7ff8b807c860 2026-03-09T20:19:44.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:44 vm09 ceph-mon[54524]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T20:19:44.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:44 vm09 ceph-mon[54524]: osdmap e28: 5 total, 4 up, 5 in 2026-03-09T20:19:44.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:44 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:19:44.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:44 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:19:44.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:44 vm09 ceph-mon[54524]: pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 507 MiB used, 79 GiB / 80 GiB avail 2026-03-09T20:19:44.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:44 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:19:44.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:44 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:19:44.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:44 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:44.523 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 20:19:44 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-4[58888]: 2026-03-09T20:19:44.394+0000 7fb84a282640 -1 osd.4 0 waiting for initial osdmap 2026-03-09T20:19:44.523 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 20:19:44 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-4[58888]: 2026-03-09T20:19:44.402+0000 7fb845098640 -1 osd.4 29 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T20:19:44.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:44 vm05 ceph-mon[61345]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T20:19:44.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:44 vm05 ceph-mon[61345]: osdmap e28: 5 total, 4 up, 5 in 2026-03-09T20:19:44.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:44 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:19:44.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:44 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:19:44.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:44 vm05 ceph-mon[61345]: pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 507 MiB used, 79 GiB / 80 GiB avail 2026-03-09T20:19:44.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:44 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:19:44.659 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:44 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:19:44.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:44 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:44.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:44 vm05 ceph-mon[51870]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T20:19:44.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:44 vm05 ceph-mon[51870]: osdmap e28: 5 total, 4 up, 5 in 2026-03-09T20:19:44.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:44 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:19:44.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:44 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:19:44.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:44 vm05 ceph-mon[51870]: pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 507 MiB used, 79 GiB / 80 GiB avail 2026-03-09T20:19:44.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:44 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:19:44.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:44 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:19:44.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:44 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:45.344 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:45 vm09 ceph-mon[54524]: purged_snaps scrub starts 2026-03-09T20:19:45.345 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:45 vm09 ceph-mon[54524]: purged_snaps scrub ok 2026-03-09T20:19:45.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:45 vm05 ceph-mon[61345]: purged_snaps scrub starts 2026-03-09T20:19:45.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:45 vm05 ceph-mon[61345]: purged_snaps scrub ok 2026-03-09T20:19:45.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:45 vm05 ceph-mon[61345]: from='client.24239 v1:192.168.123.109:0/1522543036' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:45.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:45 vm05 ceph-mon[61345]: osdmap e29: 5 total, 4 up, 5 in 2026-03-09T20:19:45.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:45 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:19:45.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:45 vm05 ceph-mon[61345]: from='osd.4 ' entity='osd.4' 2026-03-09T20:19:45.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:45 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.109:0/1309911930' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "61fedd79-419a-4176-9825-9d059c9d73f0"}]: dispatch 2026-03-09T20:19:45.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:45 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.109:0/1309911930' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "61fedd79-419a-4176-9825-9d059c9d73f0"}]': finished 2026-03-09T20:19:45.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:45 vm05 ceph-mon[61345]: osd.4 v1:192.168.123.109:6800/4063967321 boot 2026-03-09T20:19:45.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:45 vm05 ceph-mon[61345]: osdmap e30: 6 total, 5 up, 6 in 2026-03-09T20:19:45.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:45 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:19:45.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:45 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:19:45.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:45 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.109:0/2685874896' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:19:45.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:45 vm05 ceph-mon[51870]: purged_snaps scrub starts 2026-03-09T20:19:45.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:45 vm05 ceph-mon[51870]: purged_snaps scrub ok 2026-03-09T20:19:45.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:45 vm05 ceph-mon[51870]: from='client.24239 v1:192.168.123.109:0/1522543036' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:45.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:45 vm05 ceph-mon[51870]: osdmap e29: 5 total, 4 up, 5 in 2026-03-09T20:19:45.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:45 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:19:45.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:45 vm05 ceph-mon[51870]: from='osd.4 ' entity='osd.4' 2026-03-09T20:19:45.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:45 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.109:0/1309911930' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "61fedd79-419a-4176-9825-9d059c9d73f0"}]: dispatch 2026-03-09T20:19:45.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:45 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.109:0/1309911930' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "61fedd79-419a-4176-9825-9d059c9d73f0"}]': finished 2026-03-09T20:19:45.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:45 vm05 ceph-mon[51870]: osd.4 v1:192.168.123.109:6800/4063967321 boot 2026-03-09T20:19:45.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:45 vm05 ceph-mon[51870]: osdmap e30: 6 total, 5 up, 6 in 2026-03-09T20:19:45.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:45 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:19:45.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:45 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:19:45.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:45 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.109:0/2685874896' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:19:45.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:45 vm09 ceph-mon[54524]: from='client.24239 v1:192.168.123.109:0/1522543036' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:45.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:45 vm09 ceph-mon[54524]: osdmap e29: 5 total, 4 up, 5 in 2026-03-09T20:19:45.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:45 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:19:45.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:45 vm09 ceph-mon[54524]: from='osd.4 ' entity='osd.4' 2026-03-09T20:19:45.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:45 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.109:0/1309911930' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "61fedd79-419a-4176-9825-9d059c9d73f0"}]: dispatch 2026-03-09T20:19:45.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:45 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.109:0/1309911930' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "61fedd79-419a-4176-9825-9d059c9d73f0"}]': finished 2026-03-09T20:19:45.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:45 vm09 ceph-mon[54524]: osd.4 v1:192.168.123.109:6800/4063967321 boot 2026-03-09T20:19:45.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:45 vm09 ceph-mon[54524]: osdmap e30: 6 total, 5 up, 6 in 2026-03-09T20:19:45.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:45 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:19:45.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:45 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:19:45.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:45 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.109:0/2685874896' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:19:46.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:46 vm05 ceph-mon[61345]: pgmap v64: 1 pgs: 1 remapped+peering; 449 KiB data, 933 MiB used, 99 GiB / 100 GiB avail 2026-03-09T20:19:46.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:46 vm05 ceph-mon[61345]: osdmap e31: 6 total, 5 up, 6 in 2026-03-09T20:19:46.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:46 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:19:46.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:46 vm05 ceph-mon[51870]: pgmap v64: 1 pgs: 1 remapped+peering; 449 KiB data, 933 MiB used, 99 GiB / 100 GiB avail 2026-03-09T20:19:46.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:46 vm05 ceph-mon[51870]: osdmap e31: 6 total, 5 up, 6 in 2026-03-09T20:19:46.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:46 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:19:46.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:46 vm09 ceph-mon[54524]: pgmap v64: 1 pgs: 1 remapped+peering; 449 KiB data, 933 MiB used, 99 GiB / 100 GiB avail 2026-03-09T20:19:46.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:46 vm09 ceph-mon[54524]: osdmap e31: 6 total, 5 up, 6 in 2026-03-09T20:19:46.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:46 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:19:48.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:47 vm09 ceph-mon[54524]: osdmap e32: 6 total, 5 up, 6 in 2026-03-09T20:19:48.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:47 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:19:48.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:47 vm09 ceph-mon[54524]: pgmap v67: 1 pgs: 1 remapped+peering; 449 KiB data, 934 MiB used, 99 GiB / 100 GiB avail 2026-03-09T20:19:48.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:47 vm05 ceph-mon[61345]: osdmap e32: 6 total, 5 up, 6 in 2026-03-09T20:19:48.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:47 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:19:48.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:47 vm05 ceph-mon[61345]: pgmap v67: 1 pgs: 1 remapped+peering; 449 KiB data, 934 MiB used, 99 GiB / 100 GiB avail 2026-03-09T20:19:48.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:47 vm05 ceph-mon[51870]: osdmap e32: 6 total, 5 up, 6 in 2026-03-09T20:19:48.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:47 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:19:48.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:47 vm05 ceph-mon[51870]: pgmap v67: 1 pgs: 1 remapped+peering; 449 KiB data, 934 MiB used, 99 GiB / 100 GiB avail 2026-03-09T20:19:49.842 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:49 vm09 ceph-mon[54524]: pgmap v68: 1 pgs: 1 remapped+peering; 449 KiB data, 534 MiB used, 99 GiB / 100 GiB avail 
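Note: the 'osd crush create-or-move ... "weight":0.0195' entries above follow the usual Ceph convention of setting the initial CRUSH weight to the device capacity expressed in TiB, so 0.0195 corresponds to a roughly 20 GiB device; that matches the ~20 GiB each new OSD adds to the pgmap totals in this log (60 GiB -> 80 GiB -> 100 GiB). A one-line sanity check under that assumption:

    # Assumption: initial CRUSH weight = device capacity in TiB (Ceph convention).
    capacity_gib = 20
    print(round(capacity_gib / 1024, 4))   # 0.0195 -> matches the weight in the log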
2026-03-09T20:19:49.843 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:49 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T20:19:49.843 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:49 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:49.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:49 vm05 ceph-mon[61345]: pgmap v68: 1 pgs: 1 remapped+peering; 449 KiB data, 534 MiB used, 99 GiB / 100 GiB avail 2026-03-09T20:19:49.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:49 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T20:19:49.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:49 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:49.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:49 vm05 ceph-mon[51870]: pgmap v68: 1 pgs: 1 remapped+peering; 449 KiB data, 534 MiB used, 99 GiB / 100 GiB avail 2026-03-09T20:19:49.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:49 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T20:19:49.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:49 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:50.655 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:50 vm09 ceph-mon[54524]: Deploying daemon osd.5 on vm09 2026-03-09T20:19:50.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:50 vm05 ceph-mon[61345]: Deploying daemon osd.5 on vm09 2026-03-09T20:19:50.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:50 vm05 ceph-mon[51870]: Deploying daemon osd.5 on vm09 2026-03-09T20:19:51.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:51 vm09 ceph-mon[54524]: pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 534 MiB used, 99 GiB / 100 GiB avail; 66 KiB/s, 0 objects/s recovering 2026-03-09T20:19:52.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:51 vm05 ceph-mon[61345]: pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 534 MiB used, 99 GiB / 100 GiB avail; 66 KiB/s, 0 objects/s recovering 2026-03-09T20:19:52.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:51 vm05 ceph-mon[51870]: pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 534 MiB used, 99 GiB / 100 GiB avail; 66 KiB/s, 0 objects/s recovering 2026-03-09T20:19:53.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:52 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:53.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:52 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:53.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:52 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:53.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:52 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: 
dispatch 2026-03-09T20:19:53.164 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:52 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:53.164 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:52 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:53.164 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:52 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:53.164 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:52 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:53.164 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:52 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:53.487 INFO:teuthology.orchestra.run.vm09.stdout:Created osd(s) 5 on host 'vm09' 2026-03-09T20:19:53.488 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:53.485+0000 7ff8c77fe640 1 -- 192.168.123.109:0/1522543036 <== mgr.14150 v1:192.168.123.105:6800/3290461294 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (unknown 0 0 1967485741) 0x7ff8a8002bf0 con 0x7ff8b807c860 2026-03-09T20:19:53.488 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:53.487+0000 7ff8e8669640 1 -- 192.168.123.109:0/1522543036 >> v1:192.168.123.105:6800/3290461294 conn(0x7ff8b807c860 legacy=0x7ff8b807ed20 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:19:53.488 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:53.487+0000 7ff8e8669640 1 -- 192.168.123.109:0/1522543036 >> v1:192.168.123.105:6790/0 conn(0x7ff8e0102700 legacy=0x7ff8e0199b80 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:19:53.488 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:53.487+0000 7ff8e8669640 1 -- 192.168.123.109:0/1522543036 shutdown_connections 2026-03-09T20:19:53.488 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:53.487+0000 7ff8e8669640 1 -- 192.168.123.109:0/1522543036 >> 192.168.123.109:0/1522543036 conn(0x7ff8e00fde70 msgr2=0x7ff8e0108f80 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:19:53.488 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:53.487+0000 7ff8e8669640 1 -- 192.168.123.109:0/1522543036 shutdown_connections 2026-03-09T20:19:53.488 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:53.487+0000 7ff8e8669640 1 -- 192.168.123.109:0/1522543036 wait complete. 2026-03-09T20:19:53.644 DEBUG:teuthology.orchestra.run.vm09:osd.5> sudo journalctl -f -n 0 -u ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@osd.5.service 2026-03-09T20:19:53.645 INFO:tasks.cephadm:Deploying osd.6 on vm09 with /dev/vdc... 
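Note: the DEBUG lines of the form "osd.5> sudo journalctl -f -n 0 -u ceph-<fsid>@osd.5.service" are where the interleaved "journalctl@ceph.*" entries in this log originate: after each daemon is deployed, the harness starts following that unit's journal and copies its output into the run log. A rough stand-in for such a follower is sketched below; subprocess/ssh are assumed substitutes for teuthology's remote execution, and the printed prefix only approximates the format seen above.

    import subprocess

    FSID = "c0151936-1bf4-11f1-b896-23f7bea8a6ea"

    def follow_daemon(host, daemon):
        # e.g. follow_daemon("vm09", "osd.5")
        unit = f"ceph-{FSID}@{daemon}.service"
        cmd = ["ssh", host, "sudo", "journalctl", "-f", "-n", "0", "-u", unit]
        with subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True) as proc:
            for line in proc.stdout:
                print(f"INFO:journalctl@ceph.{daemon}.{host}.stdout:{line}", end="")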
2026-03-09T20:19:53.645 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- lvm zap /dev/vdc 2026-03-09T20:19:53.979 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.b/config 2026-03-09T20:19:54.008 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:53 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:54.008 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:53 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:54.008 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:53 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:54.008 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:53 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:54.008 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:53 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:54.008 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:53 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:54.008 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:53 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:54.008 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:53 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:54.008 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:53 vm09 ceph-mon[54524]: pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 534 MiB used, 99 GiB / 100 GiB avail; 56 KiB/s, 0 objects/s recovering 2026-03-09T20:19:54.262 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 20:19:54 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-5[64082]: 2026-03-09T20:19:54.095+0000 7fdab2f75740 -1 osd.5 0 log_to_monitors true 2026-03-09T20:19:54.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:53 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:54.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:53 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:54.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:53 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:54.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:53 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:54.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:53 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:54.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:53 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 
2026-03-09T20:19:54.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:53 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:54.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:53 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:54.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:53 vm05 ceph-mon[61345]: pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 534 MiB used, 99 GiB / 100 GiB avail; 56 KiB/s, 0 objects/s recovering 2026-03-09T20:19:54.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:53 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:54.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:53 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:54.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:53 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:54.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:53 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:54.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:53 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:54.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:53 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:54.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:53 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:54.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:53 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:54.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:53 vm05 ceph-mon[51870]: pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 534 MiB used, 99 GiB / 100 GiB avail; 56 KiB/s, 0 objects/s recovering 2026-03-09T20:19:55.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:54 vm09 ceph-mon[54524]: from='osd.5 v1:192.168.123.109:6804/3558334635' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T20:19:55.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:54 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:55.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:54 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:55.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:54 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:19:55.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:54 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:19:55.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:54 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-09T20:19:55.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:54 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:55.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:54 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:55.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:54 vm05 ceph-mon[61345]: from='osd.5 v1:192.168.123.109:6804/3558334635' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T20:19:55.417 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:54 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:55.417 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:54 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:55.418 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:54 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:19:55.418 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:54 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:19:55.418 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:54 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:55.418 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:54 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:55.418 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:54 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:55.418 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:54 vm05 ceph-mon[51870]: from='osd.5 v1:192.168.123.109:6804/3558334635' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T20:19:55.418 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:54 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:55.418 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:54 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:55.418 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:54 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:19:55.418 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:54 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:19:55.418 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:54 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:55.418 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:54 vm05 ceph-mon[51870]: from='mgr.14150 
v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:55.418 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:54 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:19:55.600 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:19:55.621 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph orch daemon add osd vm09:/dev/vdc 2026-03-09T20:19:55.812 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.b/config 2026-03-09T20:19:56.017 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:56.016+0000 7f3417820640 1 -- 192.168.123.109:0/2265909400 >> v1:192.168.123.105:6789/0 conn(0x7f341010cb20 legacy=0x7f341010ef70 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:19:56.017 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:56.017+0000 7f3417820640 1 -- 192.168.123.109:0/2265909400 shutdown_connections 2026-03-09T20:19:56.017 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:56.017+0000 7f3417820640 1 -- 192.168.123.109:0/2265909400 >> 192.168.123.109:0/2265909400 conn(0x7f34100fc4a0 msgr2=0x7f34100fe900 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:19:56.017 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:56.017+0000 7f3417820640 1 -- 192.168.123.109:0/2265909400 shutdown_connections 2026-03-09T20:19:56.017 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:56.017+0000 7f3417820640 1 -- 192.168.123.109:0/2265909400 wait complete. 
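The osd.6 deployment above follows the same two-step, per-device pattern used for each OSD in this run: zap the device with ceph-volume, then hand it to the orchestrator with "ceph orch daemon add osd". A minimal sketch of that loop, with the image, fsid, config paths and devices taken from the log lines above and subprocess standing in for teuthology's SSH-based runner:

import subprocess

IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"
FSID = "c0151936-1bf4-11f1-b896-23f7bea8a6ea"
CONF = "/etc/ceph/ceph.conf"
KEYRING = "/etc/ceph/ceph.client.admin.keyring"

def cephadm(*args):
    # Runs cephadm locally for illustration; teuthology executes the same
    # command line over SSH on the target host.
    subprocess.run(["sudo", "cephadm", "--image", IMAGE, *args], check=True)

def deploy_osd(host, dev):
    # 1. Wipe any leftover LVM metadata on the device.
    cephadm("ceph-volume", "-c", CONF, "-k", KEYRING, "--fsid", FSID,
            "--", "lvm", "zap", dev)
    # 2. Ask the orchestrator to create an OSD on the clean device.
    cephadm("shell", "-c", CONF, "-k", KEYRING, "--fsid", FSID,
            "--", "ceph", "orch", "daemon", "add", "osd", "%s:%s" % (host, dev))

for device in ("/dev/vdc", "/dev/vdb"):
    deploy_osd("vm09", device)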
2026-03-09T20:19:56.018 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:56.017+0000 7f3417820640 1 Processor -- start 2026-03-09T20:19:56.018 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:56.018+0000 7f3417820640 1 -- start start 2026-03-09T20:19:56.018 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:56.018+0000 7f3417820640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f341019c990 con 0x7f341010cb20 2026-03-09T20:19:56.018 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:56.018+0000 7f3417820640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f34101a8150 con 0x7f3410102bc0 2026-03-09T20:19:56.018 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:56.018+0000 7f3417820640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f34101a9330 con 0x7f3410109010 2026-03-09T20:19:56.019 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:56.018+0000 7f3414d94640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7f3410109010 0x7f34101a22f0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.109:36714/0 (socket says 192.168.123.109:36714) 2026-03-09T20:19:56.019 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:56.018+0000 7f3414d94640 1 -- 192.168.123.109:0/1970691178 learned_addr learned my addr 192.168.123.109:0/1970691178 (peer_addr_for_me v1:192.168.123.109:0/0) 2026-03-09T20:19:56.020 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:56.019+0000 7f33fe7fc640 1 -- 192.168.123.109:0/1970691178 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1018893947 0 0) 0x7f341019c990 con 0x7f341010cb20 2026-03-09T20:19:56.020 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:56.019+0000 7f33fe7fc640 1 -- 192.168.123.109:0/1970691178 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f33ec003620 con 0x7f341010cb20 2026-03-09T20:19:56.020 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:56.019+0000 7f33fe7fc640 1 -- 192.168.123.109:0/1970691178 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 118093504 0 0) 0x7f33ec003620 con 0x7f341010cb20 2026-03-09T20:19:56.020 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:56.019+0000 7f33fe7fc640 1 -- 192.168.123.109:0/1970691178 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f341019c990 con 0x7f341010cb20 2026-03-09T20:19:56.020 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:56.019+0000 7f33fe7fc640 1 -- 192.168.123.109:0/1970691178 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f340c002fc0 con 0x7f341010cb20 2026-03-09T20:19:56.020 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:56.020+0000 7f33fe7fc640 1 -- 192.168.123.109:0/1970691178 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 3196351193 0 0) 0x7f341019c990 con 0x7f341010cb20 2026-03-09T20:19:56.020 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:56.020+0000 7f33fe7fc640 1 -- 192.168.123.109:0/1970691178 >> v1:192.168.123.105:6790/0 conn(0x7f3410109010 legacy=0x7f34101a22f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:19:56.020 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:56.020+0000 7f33fe7fc640 1 -- 
192.168.123.109:0/1970691178 >> v1:192.168.123.109:6789/0 conn(0x7f3410102bc0 legacy=0x7f341019be10 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:19:56.020 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:56.020+0000 7f33fe7fc640 1 -- 192.168.123.109:0/1970691178 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f34101aa510 con 0x7f341010cb20 2026-03-09T20:19:56.021 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:56.020+0000 7f33fe7fc640 1 -- 192.168.123.109:0/1970691178 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f340c003410 con 0x7f341010cb20 2026-03-09T20:19:56.021 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:56.020+0000 7f33fe7fc640 1 -- 192.168.123.109:0/1970691178 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f340c005300 con 0x7f341010cb20 2026-03-09T20:19:56.021 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:56.020+0000 7f3417820640 1 -- 192.168.123.109:0/1970691178 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f34101a8380 con 0x7f341010cb20 2026-03-09T20:19:56.021 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:56.020+0000 7f3417820640 1 -- 192.168.123.109:0/1970691178 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f34101a8960 con 0x7f341010cb20 2026-03-09T20:19:56.021 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:56.021+0000 7f3417820640 1 -- 192.168.123.109:0/1970691178 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f33d8005180 con 0x7f341010cb20 2026-03-09T20:19:56.023 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:56.022+0000 7f33fe7fc640 1 -- 192.168.123.109:0/1970691178 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 4267040124 0 0) 0x7f340c003b30 con 0x7f341010cb20 2026-03-09T20:19:56.023 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:56.022+0000 7f33fe7fc640 1 -- 192.168.123.109:0/1970691178 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(34..34 src has 1..34) ==== 3337+0+0 (unknown 1848175113 0 0) 0x7f340c0938a0 con 0x7f341010cb20 2026-03-09T20:19:56.024 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:56.024+0000 7f33fe7fc640 1 -- 192.168.123.109:0/1970691178 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f340c05d390 con 0x7f341010cb20 2026-03-09T20:19:56.133 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:19:56.132+0000 7f3417820640 1 -- 192.168.123.109:0/1970691178 --> v1:192.168.123.105:6800/3290461294 -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdc", "target": ["mon-mgr", ""]}) -- 0x7f33d8002bf0 con 0x7f33ec0781c0 2026-03-09T20:19:56.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:56 vm09 ceph-mon[54524]: Detected new or changed devices on vm09 2026-03-09T20:19:56.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:56 vm09 ceph-mon[54524]: Adjusting osd_memory_target on vm09 to 128.5M 2026-03-09T20:19:56.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:56 vm09 ceph-mon[54524]: Unable to set osd_memory_target on vm09 to 134768230: error parsing value: Value '134768230' is below minimum 939524096 2026-03-09T20:19:56.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:56 vm09 ceph-mon[54524]: from='osd.5 
v1:192.168.123.109:6804/3558334635' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-09T20:19:56.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:56 vm09 ceph-mon[54524]: osdmap e33: 6 total, 5 up, 6 in 2026-03-09T20:19:56.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:56 vm09 ceph-mon[54524]: from='osd.5 v1:192.168.123.109:6804/3558334635' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T20:19:56.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:56 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:19:56.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:56 vm09 ceph-mon[54524]: pgmap v72: 1 pgs: 1 active+clean; 449 KiB data, 534 MiB used, 99 GiB / 100 GiB avail; 51 KiB/s, 0 objects/s recovering 2026-03-09T20:19:56.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:56 vm05 ceph-mon[61345]: Detected new or changed devices on vm09 2026-03-09T20:19:56.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:56 vm05 ceph-mon[61345]: Adjusting osd_memory_target on vm09 to 128.5M 2026-03-09T20:19:56.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:56 vm05 ceph-mon[61345]: Unable to set osd_memory_target on vm09 to 134768230: error parsing value: Value '134768230' is below minimum 939524096 2026-03-09T20:19:56.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:56 vm05 ceph-mon[61345]: from='osd.5 v1:192.168.123.109:6804/3558334635' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-09T20:19:56.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:56 vm05 ceph-mon[61345]: osdmap e33: 6 total, 5 up, 6 in 2026-03-09T20:19:56.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:56 vm05 ceph-mon[61345]: from='osd.5 v1:192.168.123.109:6804/3558334635' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T20:19:56.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:56 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:19:56.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:56 vm05 ceph-mon[61345]: pgmap v72: 1 pgs: 1 active+clean; 449 KiB data, 534 MiB used, 99 GiB / 100 GiB avail; 51 KiB/s, 0 objects/s recovering 2026-03-09T20:19:56.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:56 vm05 ceph-mon[51870]: Detected new or changed devices on vm09 2026-03-09T20:19:56.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:56 vm05 ceph-mon[51870]: Adjusting osd_memory_target on vm09 to 128.5M 2026-03-09T20:19:56.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:56 vm05 ceph-mon[51870]: Unable to set osd_memory_target on vm09 to 134768230: error parsing value: Value '134768230' is below minimum 939524096 2026-03-09T20:19:56.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:56 vm05 ceph-mon[51870]: from='osd.5 v1:192.168.123.109:6804/3558334635' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-09T20:19:56.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:56 vm05 ceph-mon[51870]: osdmap e33: 6 total, 5 up, 6 in 2026-03-09T20:19:56.410 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:56 vm05 ceph-mon[51870]: from='osd.5 v1:192.168.123.109:6804/3558334635' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T20:19:56.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:56 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:19:56.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:56 vm05 ceph-mon[51870]: pgmap v72: 1 pgs: 1 active+clean; 449 KiB data, 534 MiB used, 99 GiB / 100 GiB avail; 51 KiB/s, 0 objects/s recovering 2026-03-09T20:19:57.206 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 20:19:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-5[64082]: 2026-03-09T20:19:57.142+0000 7fdaaf709640 -1 osd.5 0 waiting for initial osdmap 2026-03-09T20:19:57.206 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 20:19:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-5[64082]: 2026-03-09T20:19:57.151+0000 7fdaaa51f640 -1 osd.5 35 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T20:19:57.206 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:57 vm09 ceph-mon[54524]: from='osd.5 v1:192.168.123.109:6804/3558334635' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T20:19:57.206 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:57 vm09 ceph-mon[54524]: osdmap e34: 6 total, 5 up, 6 in 2026-03-09T20:19:57.206 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:57 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:19:57.206 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:57 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:19:57.206 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:57 vm09 ceph-mon[54524]: from='client.14385 v1:192.168.123.109:0/1970691178' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:57.206 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:57 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:19:57.206 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:57 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:19:57.206 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:57 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:57.206 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:57 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:19:57.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:57 vm05 ceph-mon[61345]: from='osd.5 v1:192.168.123.109:6804/3558334635' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 
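The repeated osd_memory_target warnings above are expected on these small VPS nodes: the autotuner divides host memory across the local OSDs and arrives at a value below the option's hard minimum, so the config change is refused and the default stays in effect. A quick check of the numbers quoted in the log:

proposed = 134768230      # bytes; the "128.5M" the autotuner wanted to set
minimum  = 939524096      # bytes; the floor quoted in the error message (896 MiB)

print(proposed / 2**20)   # ~128.5 -> matches "Adjusting osd_memory_target ... to 128.5M"
print(minimum / 2**20)    # 896.0  -> the minimum the mon enforces
print(proposed < minimum) # True   -> hence "Unable to set osd_memory_target"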
2026-03-09T20:19:57.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:57 vm05 ceph-mon[61345]: osdmap e34: 6 total, 5 up, 6 in 2026-03-09T20:19:57.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:57 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:19:57.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:57 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:19:57.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:57 vm05 ceph-mon[61345]: from='client.14385 v1:192.168.123.109:0/1970691178' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:57.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:57 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:19:57.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:57 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:19:57.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:57 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:57.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:57 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:19:57.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:57 vm05 ceph-mon[51870]: from='osd.5 v1:192.168.123.109:6804/3558334635' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T20:19:57.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:57 vm05 ceph-mon[51870]: osdmap e34: 6 total, 5 up, 6 in 2026-03-09T20:19:57.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:57 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:19:57.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:57 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:19:57.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:57 vm05 ceph-mon[51870]: from='client.14385 v1:192.168.123.109:0/1970691178' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:57.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:57 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:19:57.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:57 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:19:57.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:57 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' 
entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:57.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:57 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:19:58.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:58 vm05 ceph-mon[61345]: purged_snaps scrub starts 2026-03-09T20:19:58.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:58 vm05 ceph-mon[61345]: purged_snaps scrub ok 2026-03-09T20:19:58.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:58 vm05 ceph-mon[61345]: osdmap e35: 6 total, 5 up, 6 in 2026-03-09T20:19:58.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:58 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:19:58.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:58 vm05 ceph-mon[61345]: from='osd.5 v1:192.168.123.109:6804/3558334635' entity='osd.5' 2026-03-09T20:19:58.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:58 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.109:0/426632494' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d4965700-0e14-493b-8c85-282e7ba1da51"}]: dispatch 2026-03-09T20:19:58.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:58 vm05 ceph-mon[61345]: from='client.24274 ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d4965700-0e14-493b-8c85-282e7ba1da51"}]: dispatch 2026-03-09T20:19:58.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:58 vm05 ceph-mon[61345]: from='client.24274 ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d4965700-0e14-493b-8c85-282e7ba1da51"}]': finished 2026-03-09T20:19:58.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:58 vm05 ceph-mon[61345]: osd.5 v1:192.168.123.109:6804/3558334635 boot 2026-03-09T20:19:58.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:58 vm05 ceph-mon[61345]: osdmap e36: 7 total, 6 up, 7 in 2026-03-09T20:19:58.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:58 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:19:58.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:58 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T20:19:58.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:58 vm05 ceph-mon[61345]: pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 534 MiB used, 99 GiB / 100 GiB avail 2026-03-09T20:19:58.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:58 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.109:0/2193141180' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:19:58.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:58 vm05 ceph-mon[51870]: purged_snaps scrub starts 2026-03-09T20:19:58.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:58 vm05 ceph-mon[51870]: purged_snaps scrub ok 2026-03-09T20:19:58.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:58 vm05 ceph-mon[51870]: osdmap e35: 6 total, 5 up, 6 in 2026-03-09T20:19:58.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:58 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:19:58.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:58 vm05 ceph-mon[51870]: from='osd.5 v1:192.168.123.109:6804/3558334635' entity='osd.5' 2026-03-09T20:19:58.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:58 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.109:0/426632494' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d4965700-0e14-493b-8c85-282e7ba1da51"}]: dispatch 2026-03-09T20:19:58.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:58 vm05 ceph-mon[51870]: from='client.24274 ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d4965700-0e14-493b-8c85-282e7ba1da51"}]: dispatch 2026-03-09T20:19:58.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:58 vm05 ceph-mon[51870]: from='client.24274 ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d4965700-0e14-493b-8c85-282e7ba1da51"}]': finished 2026-03-09T20:19:58.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:58 vm05 ceph-mon[51870]: osd.5 v1:192.168.123.109:6804/3558334635 boot 2026-03-09T20:19:58.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:58 vm05 ceph-mon[51870]: osdmap e36: 7 total, 6 up, 7 in 2026-03-09T20:19:58.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:58 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:19:58.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:58 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T20:19:58.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:58 vm05 ceph-mon[51870]: pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 534 MiB used, 99 GiB / 100 GiB avail 2026-03-09T20:19:58.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:58 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.109:0/2193141180' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:19:58.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:58 vm09 ceph-mon[54524]: purged_snaps scrub starts 2026-03-09T20:19:58.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:58 vm09 ceph-mon[54524]: purged_snaps scrub ok 2026-03-09T20:19:58.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:58 vm09 ceph-mon[54524]: osdmap e35: 6 total, 5 up, 6 in 2026-03-09T20:19:58.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:58 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:19:58.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:58 vm09 ceph-mon[54524]: from='osd.5 v1:192.168.123.109:6804/3558334635' entity='osd.5' 2026-03-09T20:19:58.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:58 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.109:0/426632494' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d4965700-0e14-493b-8c85-282e7ba1da51"}]: dispatch 2026-03-09T20:19:58.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:58 vm09 ceph-mon[54524]: from='client.24274 ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d4965700-0e14-493b-8c85-282e7ba1da51"}]: dispatch 2026-03-09T20:19:58.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:58 vm09 ceph-mon[54524]: from='client.24274 ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d4965700-0e14-493b-8c85-282e7ba1da51"}]': finished 2026-03-09T20:19:58.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:58 vm09 ceph-mon[54524]: osd.5 v1:192.168.123.109:6804/3558334635 boot 2026-03-09T20:19:58.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:58 vm09 ceph-mon[54524]: osdmap e36: 7 total, 6 up, 7 in 2026-03-09T20:19:58.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:58 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:19:58.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:58 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T20:19:58.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:58 vm09 ceph-mon[54524]: pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 534 MiB used, 99 GiB / 100 GiB avail 2026-03-09T20:19:58.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:58 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.109:0/2193141180' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:19:59.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:59 vm09 ceph-mon[54524]: osdmap e37: 7 total, 6 up, 7 in 2026-03-09T20:19:59.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:19:59 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T20:19:59.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:59 vm05 ceph-mon[61345]: osdmap e37: 7 total, 6 up, 7 in 2026-03-09T20:19:59.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:19:59 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T20:19:59.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:59 vm05 ceph-mon[51870]: osdmap e37: 7 total, 6 up, 7 in 2026-03-09T20:19:59.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:19:59 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T20:20:00.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:00 vm09 ceph-mon[54524]: osdmap e38: 7 total, 6 up, 7 in 2026-03-09T20:20:00.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:00 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T20:20:00.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:00 vm09 ceph-mon[54524]: pgmap v79: 1 pgs: 1 remapped; 449 KiB data, 960 MiB used, 119 GiB / 120 GiB avail 2026-03-09T20:20:00.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:00 vm09 ceph-mon[54524]: overall HEALTH_OK 2026-03-09T20:20:00.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:00 vm05 ceph-mon[61345]: osdmap e38: 7 total, 6 up, 7 in 2026-03-09T20:20:00.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:00 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T20:20:00.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:00 vm05 ceph-mon[61345]: pgmap v79: 1 pgs: 1 remapped; 449 KiB data, 960 MiB used, 119 GiB / 120 GiB avail 2026-03-09T20:20:00.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:00 vm05 ceph-mon[61345]: overall HEALTH_OK 2026-03-09T20:20:00.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:00 vm05 ceph-mon[51870]: osdmap e38: 7 total, 6 up, 7 in 2026-03-09T20:20:00.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:00 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T20:20:00.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:00 vm05 ceph-mon[51870]: pgmap v79: 1 pgs: 1 remapped; 449 KiB data, 960 MiB used, 119 GiB / 120 GiB avail 2026-03-09T20:20:00.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:00 vm05 ceph-mon[51870]: overall HEALTH_OK 2026-03-09T20:20:02.640 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:02 vm09 ceph-mon[54524]: pgmap v80: 1 pgs: 1 remapped; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail 2026-03-09T20:20:02.640 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:02 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T20:20:02.640 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:02 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:02.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:02 vm05 ceph-mon[61345]: pgmap v80: 1 pgs: 1 remapped; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail 2026-03-09T20:20:02.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:02 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T20:20:02.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:02 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:02.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:02 vm05 ceph-mon[51870]: pgmap v80: 1 pgs: 1 remapped; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail 2026-03-09T20:20:02.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:02 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T20:20:02.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:02 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:03.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:03 vm05 ceph-mon[61345]: Deploying daemon osd.6 on vm09 2026-03-09T20:20:03.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:03 vm05 ceph-mon[51870]: Deploying daemon osd.6 on vm09 2026-03-09T20:20:03.977 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:03 vm09 ceph-mon[54524]: Deploying daemon osd.6 on vm09 2026-03-09T20:20:04.601 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:04 vm09 ceph-mon[54524]: pgmap v81: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T20:20:04.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:04 vm05 ceph-mon[61345]: pgmap v81: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T20:20:04.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:04 vm05 ceph-mon[51870]: pgmap v81: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T20:20:05.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:05 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:20:05.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:05 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:05.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:05 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:05.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:05 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:05.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:05 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:05.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:05 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:05.909 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:05 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:20:05.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:05 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:05.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:05 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:20:05.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:05 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:05.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:05 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:05.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:05 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:05.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:05 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:05.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:05 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:05.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:05 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:20:05.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:05 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:05.917 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:05 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:20:05.917 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:05 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:05.917 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:05 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:05.917 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:05 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:05.917 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:05 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:05.917 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:05 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:05.917 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:05 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:20:05.917 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:05 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:05.957 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:05.957+0000 7f33fe7fc640 1 -- 192.168.123.109:0/1970691178 <== mgr.14150 
v1:192.168.123.105:6800/3290461294 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (unknown 0 0 2506627020) 0x7f33d8002bf0 con 0x7f33ec0781c0 2026-03-09T20:20:05.957 INFO:teuthology.orchestra.run.vm09.stdout:Created osd(s) 6 on host 'vm09' 2026-03-09T20:20:05.961 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:05.961+0000 7f3417820640 1 -- 192.168.123.109:0/1970691178 >> v1:192.168.123.105:6800/3290461294 conn(0x7f33ec0781c0 legacy=0x7f33ec07a680 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:05.961 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:05.961+0000 7f3417820640 1 -- 192.168.123.109:0/1970691178 >> v1:192.168.123.105:6789/0 conn(0x7f341010cb20 legacy=0x7f34101a5a20 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:05.961 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:05.961+0000 7f3417820640 1 -- 192.168.123.109:0/1970691178 shutdown_connections 2026-03-09T20:20:05.961 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:05.961+0000 7f3417820640 1 -- 192.168.123.109:0/1970691178 >> 192.168.123.109:0/1970691178 conn(0x7f34100fc4a0 msgr2=0x7f341010ef30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:05.961 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:05.961+0000 7f3417820640 1 -- 192.168.123.109:0/1970691178 shutdown_connections 2026-03-09T20:20:05.962 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:05.961+0000 7f3417820640 1 -- 192.168.123.109:0/1970691178 wait complete. 2026-03-09T20:20:06.121 DEBUG:teuthology.orchestra.run.vm09:osd.6> sudo journalctl -f -n 0 -u ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@osd.6.service 2026-03-09T20:20:06.122 INFO:tasks.cephadm:Deploying osd.7 on vm09 with /dev/vdb... 2026-03-09T20:20:06.122 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- lvm zap /dev/vdb 2026-03-09T20:20:06.424 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 20:20:06 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-6[69280]: 2026-03-09T20:20:06.277+0000 7f0c4be13740 -1 osd.6 0 log_to_monitors true 2026-03-09T20:20:06.460 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.b/config 2026-03-09T20:20:06.714 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:06 vm09 ceph-mon[54524]: pgmap v82: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T20:20:06.714 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:06 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:20:06.714 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:06 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:06.714 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:06 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:06.714 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:06 vm09 ceph-mon[54524]: from='osd.6 v1:192.168.123.109:6808/3079043049' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T20:20:06.714 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:06 vm09 ceph-mon[54524]: from='osd.6 ' entity='osd.6' cmd=[{"prefix": 
"osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T20:20:06.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:06 vm05 ceph-mon[61345]: pgmap v82: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T20:20:06.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:06 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:20:06.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:06 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:06.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:06 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:06.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:06 vm05 ceph-mon[61345]: from='osd.6 v1:192.168.123.109:6808/3079043049' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T20:20:06.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:06 vm05 ceph-mon[61345]: from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T20:20:06.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:06 vm05 ceph-mon[51870]: pgmap v82: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T20:20:06.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:06 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:20:06.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:06 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:06.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:06 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:06.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:06 vm05 ceph-mon[51870]: from='osd.6 v1:192.168.123.109:6808/3079043049' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T20:20:06.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:06 vm05 ceph-mon[51870]: from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T20:20:07.979 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:20:07.995 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph orch daemon add osd vm09:/dev/vdb 2026-03-09T20:20:08.169 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:08 vm09 ceph-mon[54524]: Detected new or changed devices on vm09 2026-03-09T20:20:08.169 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:08 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:08.169 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:08 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:08.169 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:08 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": 
"osd_memory_target"}]: dispatch 2026-03-09T20:20:08.169 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:08 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:20:08.169 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:08 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:20:08.169 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:08 vm09 ceph-mon[54524]: Adjusting osd_memory_target on vm09 to 87739k 2026-03-09T20:20:08.169 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:08 vm09 ceph-mon[54524]: Unable to set osd_memory_target on vm09 to 89845486: error parsing value: Value '89845486' is below minimum 939524096 2026-03-09T20:20:08.169 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:08 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:08.169 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:08 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:20:08.170 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:08 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:08.170 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:08 vm09 ceph-mon[54524]: from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-09T20:20:08.170 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:08 vm09 ceph-mon[54524]: from='osd.6 v1:192.168.123.109:6808/3079043049' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T20:20:08.170 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:08 vm09 ceph-mon[54524]: osdmap e39: 7 total, 6 up, 7 in 2026-03-09T20:20:08.170 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:08 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T20:20:08.170 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:08 vm09 ceph-mon[54524]: from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T20:20:08.170 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:08 vm09 ceph-mon[54524]: pgmap v84: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 54 KiB/s, 0 objects/s recovering 2026-03-09T20:20:08.173 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.b/config 2026-03-09T20:20:08.313 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:08.311+0000 7f06451d9640 1 -- 192.168.123.109:0/4059938916 >> v1:192.168.123.105:6789/0 conn(0x7f064010cad0 legacy=0x7f064010ef90 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:08.313 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:08.312+0000 7f06451d9640 1 -- 192.168.123.109:0/4059938916 shutdown_connections 2026-03-09T20:20:08.313 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:08.312+0000 7f06451d9640 1 -- 
192.168.123.109:0/4059938916 >> 192.168.123.109:0/4059938916 conn(0x7f0640100120 msgr2=0x7f0640102560 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:08.313 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:08.313+0000 7f06451d9640 1 -- 192.168.123.109:0/4059938916 shutdown_connections 2026-03-09T20:20:08.313 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:08.313+0000 7f06451d9640 1 -- 192.168.123.109:0/4059938916 wait complete. 2026-03-09T20:20:08.313 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:08.313+0000 7f06451d9640 1 Processor -- start 2026-03-09T20:20:08.313 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:08.313+0000 7f06451d9640 1 -- start start 2026-03-09T20:20:08.314 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:08.313+0000 7f06451d9640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f064019caf0 con 0x7f0640104990 2026-03-09T20:20:08.314 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:08.313+0000 7f06451d9640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f06401a82b0 con 0x7f064010cad0 2026-03-09T20:20:08.314 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:08.313+0000 7f06451d9640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f06401a9490 con 0x7f0640108dc0 2026-03-09T20:20:08.314 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:08.313+0000 7f063f577640 1 --1- >> v1:192.168.123.109:6789/0 conn(0x7f064010cad0 0x7f06401a5b80 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.109:6789/0 says I am v1:192.168.123.109:46068/0 (socket says 192.168.123.109:46068) 2026-03-09T20:20:08.314 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:08.313+0000 7f063f577640 1 -- 192.168.123.109:0/917183162 learned_addr learned my addr 192.168.123.109:0/917183162 (peer_addr_for_me v1:192.168.123.109:0/0) 2026-03-09T20:20:08.314 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:08.314+0000 7f061ffff640 1 -- 192.168.123.109:0/917183162 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1546275421 0 0) 0x7f06401a82b0 con 0x7f064010cad0 2026-03-09T20:20:08.314 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:08.314+0000 7f061ffff640 1 -- 192.168.123.109:0/917183162 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f0608003620 con 0x7f064010cad0 2026-03-09T20:20:08.314 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:08.314+0000 7f061ffff640 1 -- 192.168.123.109:0/917183162 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2584568785 0 0) 0x7f06401a9490 con 0x7f0640108dc0 2026-03-09T20:20:08.314 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:08.314+0000 7f061ffff640 1 -- 192.168.123.109:0/917183162 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f06401a82b0 con 0x7f0640108dc0 2026-03-09T20:20:08.314 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:08.314+0000 7f061ffff640 1 -- 192.168.123.109:0/917183162 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 850226035 0 0) 0x7f064019caf0 con 0x7f0640104990 2026-03-09T20:20:08.314 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:08.314+0000 7f061ffff640 1 -- 192.168.123.109:0/917183162 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f06401a9490 con 
0x7f0640104990 2026-03-09T20:20:08.314 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:08.314+0000 7f061ffff640 1 -- 192.168.123.109:0/917183162 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3557498690 0 0) 0x7f06401a9490 con 0x7f0640104990 2026-03-09T20:20:08.315 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:08.314+0000 7f061ffff640 1 -- 192.168.123.109:0/917183162 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f064019caf0 con 0x7f0640104990 2026-03-09T20:20:08.315 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:08.314+0000 7f061ffff640 1 -- 192.168.123.109:0/917183162 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f06280030c0 con 0x7f0640104990 2026-03-09T20:20:08.315 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:08.314+0000 7f061ffff640 1 -- 192.168.123.109:0/917183162 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 1897950651 0 0) 0x7f064019caf0 con 0x7f0640104990 2026-03-09T20:20:08.315 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:08.314+0000 7f061ffff640 1 -- 192.168.123.109:0/917183162 >> v1:192.168.123.105:6790/0 conn(0x7f0640108dc0 legacy=0x7f06401a2450 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:08.315 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:08.314+0000 7f061ffff640 1 -- 192.168.123.109:0/917183162 >> v1:192.168.123.109:6789/0 conn(0x7f064010cad0 legacy=0x7f06401a5b80 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:08.315 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:08.314+0000 7f061ffff640 1 -- 192.168.123.109:0/917183162 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f06401aa670 con 0x7f0640104990 2026-03-09T20:20:08.315 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:08.314+0000 7f06451d9640 1 -- 192.168.123.109:0/917183162 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f06401a84e0 con 0x7f0640104990 2026-03-09T20:20:08.316 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:08.314+0000 7f06451d9640 1 -- 192.168.123.109:0/917183162 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f06401a8a20 con 0x7f0640104990 2026-03-09T20:20:08.316 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:08.315+0000 7f061ffff640 1 -- 192.168.123.109:0/917183162 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f0628003a20 con 0x7f0640104990 2026-03-09T20:20:08.316 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:08.315+0000 7f061ffff640 1 -- 192.168.123.109:0/917183162 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f0628004a00 con 0x7f0640104990 2026-03-09T20:20:08.316 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:08.316+0000 7f061ffff640 1 -- 192.168.123.109:0/917183162 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 4267040124 0 0) 0x7f06280035d0 con 0x7f0640104990 2026-03-09T20:20:08.317 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:08.316+0000 7f061ffff640 1 -- 192.168.123.109:0/917183162 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(40..40 src has 1..40) ==== 3629+0+0 (unknown 1488364030 0 0) 0x7f0628093380 con 0x7f0640104990 2026-03-09T20:20:08.317 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:08.316+0000 7f06451d9640 1 -- 192.168.123.109:0/917183162 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f060c005180 con 0x7f0640104990 2026-03-09T20:20:08.322 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:08.322+0000 7f061ffff640 1 -- 192.168.123.109:0/917183162 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f062805cd50 con 0x7f0640104990 2026-03-09T20:20:08.420 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:08.419+0000 7f06451d9640 1 -- 192.168.123.109:0/917183162 --> v1:192.168.123.105:6800/3290461294 -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdb", "target": ["mon-mgr", ""]}) -- 0x7f060c002bf0 con 0x7f0608078260 2026-03-09T20:20:08.421 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 20:20:08 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-6[69280]: 2026-03-09T20:20:08.293+0000 7f0c47d94640 -1 osd.6 0 waiting for initial osdmap 2026-03-09T20:20:08.421 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 20:20:08 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-6[69280]: 2026-03-09T20:20:08.298+0000 7f0c43bbe640 -1 osd.6 40 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T20:20:08.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:08 vm05 ceph-mon[61345]: Detected new or changed devices on vm09 2026-03-09T20:20:08.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:08 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:08.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:08 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:08.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:08 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:20:08.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:08 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:20:08.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:08 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:20:08.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:08 vm05 ceph-mon[61345]: Adjusting osd_memory_target on vm09 to 87739k 2026-03-09T20:20:08.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:08 vm05 ceph-mon[61345]: Unable to set osd_memory_target on vm09 to 89845486: error parsing value: Value '89845486' is below minimum 939524096 2026-03-09T20:20:08.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:08 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:08.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:08 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:20:08.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:08 vm05 
ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:08.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:08 vm05 ceph-mon[61345]: from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-09T20:20:08.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:08 vm05 ceph-mon[61345]: from='osd.6 v1:192.168.123.109:6808/3079043049' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T20:20:08.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:08 vm05 ceph-mon[61345]: osdmap e39: 7 total, 6 up, 7 in 2026-03-09T20:20:08.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:08 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T20:20:08.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:08 vm05 ceph-mon[61345]: from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T20:20:08.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:08 vm05 ceph-mon[61345]: pgmap v84: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 54 KiB/s, 0 objects/s recovering 2026-03-09T20:20:08.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:08 vm05 ceph-mon[51870]: Detected new or changed devices on vm09 2026-03-09T20:20:08.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:08 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:08.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:08 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:08.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:08 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:20:08.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:08 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:20:08.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:08 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:20:08.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:08 vm05 ceph-mon[51870]: Adjusting osd_memory_target on vm09 to 87739k 2026-03-09T20:20:08.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:08 vm05 ceph-mon[51870]: Unable to set osd_memory_target on vm09 to 89845486: error parsing value: Value '89845486' is below minimum 939524096 2026-03-09T20:20:08.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:08 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:08.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:08 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:20:08.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:08 vm05 ceph-mon[51870]: 
from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:08.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:08 vm05 ceph-mon[51870]: from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-09T20:20:08.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:08 vm05 ceph-mon[51870]: from='osd.6 v1:192.168.123.109:6808/3079043049' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T20:20:08.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:08 vm05 ceph-mon[51870]: osdmap e39: 7 total, 6 up, 7 in 2026-03-09T20:20:08.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:08 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T20:20:08.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:08 vm05 ceph-mon[51870]: from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T20:20:08.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:08 vm05 ceph-mon[51870]: pgmap v84: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 54 KiB/s, 0 objects/s recovering 2026-03-09T20:20:09.446 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:09 vm09 ceph-mon[54524]: purged_snaps scrub starts 2026-03-09T20:20:09.446 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:09 vm09 ceph-mon[54524]: purged_snaps scrub ok 2026-03-09T20:20:09.446 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:09 vm09 ceph-mon[54524]: from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T20:20:09.446 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:09 vm09 ceph-mon[54524]: osdmap e40: 7 total, 6 up, 7 in 2026-03-09T20:20:09.446 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:09 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T20:20:09.446 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:09 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T20:20:09.446 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:09 vm09 ceph-mon[54524]: from='client.14409 v1:192.168.123.109:0/917183162' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:20:09.446 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:09 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:20:09.446 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:09 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:20:09.446 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:09 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:09.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:09 vm05 ceph-mon[61345]: 
purged_snaps scrub starts 2026-03-09T20:20:09.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:09 vm05 ceph-mon[61345]: purged_snaps scrub ok 2026-03-09T20:20:09.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:09 vm05 ceph-mon[61345]: from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T20:20:09.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:09 vm05 ceph-mon[61345]: osdmap e40: 7 total, 6 up, 7 in 2026-03-09T20:20:09.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:09 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T20:20:09.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:09 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T20:20:09.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:09 vm05 ceph-mon[61345]: from='client.14409 v1:192.168.123.109:0/917183162' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:20:09.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:09 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:20:09.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:09 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:20:09.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:09 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:09.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:09 vm05 ceph-mon[51870]: purged_snaps scrub starts 2026-03-09T20:20:09.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:09 vm05 ceph-mon[51870]: purged_snaps scrub ok 2026-03-09T20:20:09.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:09 vm05 ceph-mon[51870]: from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T20:20:09.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:09 vm05 ceph-mon[51870]: osdmap e40: 7 total, 6 up, 7 in 2026-03-09T20:20:09.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:09 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T20:20:09.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:09 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T20:20:09.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:09 vm05 ceph-mon[51870]: from='client.14409 v1:192.168.123.109:0/917183162' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:20:09.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:09 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 
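
Note on the repeated osd_memory_target messages above: they come from cephadm's memory autotuning, which recomputes a per-OSD osd_memory_target from the host's RAM each time the OSD inventory on a node changes, so every newly added OSD triggers a round of "config rm ... osd_memory_target" followed by an "Adjusting osd_memory_target on vm09 to 87739k" attempt. On these small VPS nodes the computed share (displayed as 87739k, i.e. 89845486 bytes, roughly 86 MiB) is far below Ceph's hard floor for osd_memory_target (939524096 bytes, 896 MiB), so the mon rejects the value and the option is simply left unset; the warning is expected noise for this test rather than a failure. A minimal Python sketch of the floor check, using only the two numbers taken from the log lines (not teuthology or cephadm code):

    # Illustrative only; constants copied from the log lines above.
    OSD_MEMORY_TARGET_MIN = 939524096   # bytes; the minimum the mon enforces
    autotuned = 89845486                # "Unable to set osd_memory_target on vm09 to 89845486"

    def can_apply(value_bytes: int) -> bool:
        """True only if the autotuned value clears Ceph's hard floor."""
        return value_bytes >= OSD_MEMORY_TARGET_MIN

    print(can_apply(autotuned))         # False -> "... is below minimum 939524096"
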
2026-03-09T20:20:09.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:09 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:20:09.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:09 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:10.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:10 vm05 ceph-mon[61345]: osd.6 v1:192.168.123.109:6808/3079043049 boot 2026-03-09T20:20:10.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:10 vm05 ceph-mon[61345]: osdmap e41: 7 total, 7 up, 7 in 2026-03-09T20:20:10.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:10 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T20:20:10.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:10 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.109:0/3106335161' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ae4f5298-3a65-4f5e-b653-7ee92ac3f2a9"}]: dispatch 2026-03-09T20:20:10.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:10 vm05 ceph-mon[61345]: from='client.24301 ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ae4f5298-3a65-4f5e-b653-7ee92ac3f2a9"}]: dispatch 2026-03-09T20:20:10.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:10 vm05 ceph-mon[61345]: from='client.24301 ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ae4f5298-3a65-4f5e-b653-7ee92ac3f2a9"}]': finished 2026-03-09T20:20:10.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:10 vm05 ceph-mon[61345]: osdmap e42: 8 total, 7 up, 8 in 2026-03-09T20:20:10.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:10 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T20:20:10.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:10 vm05 ceph-mon[61345]: pgmap v88: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T20:20:10.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:10 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.109:0/1067222609' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:20:10.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:10 vm05 ceph-mon[51870]: osd.6 v1:192.168.123.109:6808/3079043049 boot 2026-03-09T20:20:10.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:10 vm05 ceph-mon[51870]: osdmap e41: 7 total, 7 up, 7 in 2026-03-09T20:20:10.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:10 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T20:20:10.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:10 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.109:0/3106335161' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ae4f5298-3a65-4f5e-b653-7ee92ac3f2a9"}]: dispatch 2026-03-09T20:20:10.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:10 vm05 ceph-mon[51870]: from='client.24301 ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ae4f5298-3a65-4f5e-b653-7ee92ac3f2a9"}]: dispatch 2026-03-09T20:20:10.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:10 vm05 ceph-mon[51870]: from='client.24301 ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ae4f5298-3a65-4f5e-b653-7ee92ac3f2a9"}]': finished 2026-03-09T20:20:10.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:10 vm05 ceph-mon[51870]: osdmap e42: 8 total, 7 up, 8 in 2026-03-09T20:20:10.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:10 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T20:20:10.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:10 vm05 ceph-mon[51870]: pgmap v88: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T20:20:10.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:10 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.109:0/1067222609' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:20:10.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:10 vm09 ceph-mon[54524]: osd.6 v1:192.168.123.109:6808/3079043049 boot 2026-03-09T20:20:10.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:10 vm09 ceph-mon[54524]: osdmap e41: 7 total, 7 up, 7 in 2026-03-09T20:20:10.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:10 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T20:20:10.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:10 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.109:0/3106335161' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ae4f5298-3a65-4f5e-b653-7ee92ac3f2a9"}]: dispatch 2026-03-09T20:20:10.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:10 vm09 ceph-mon[54524]: from='client.24301 ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ae4f5298-3a65-4f5e-b653-7ee92ac3f2a9"}]: dispatch 2026-03-09T20:20:10.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:10 vm09 ceph-mon[54524]: from='client.24301 ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ae4f5298-3a65-4f5e-b653-7ee92ac3f2a9"}]': finished 2026-03-09T20:20:10.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:10 vm09 ceph-mon[54524]: osdmap e42: 8 total, 7 up, 8 in 2026-03-09T20:20:10.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:10 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T20:20:10.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:10 vm09 ceph-mon[54524]: pgmap v88: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T20:20:10.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:10 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.109:0/1067222609' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:20:11.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:11 vm05 ceph-mon[61345]: osdmap e43: 8 total, 7 up, 8 in 2026-03-09T20:20:11.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:11 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T20:20:11.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:11 vm05 ceph-mon[51870]: osdmap e43: 8 total, 7 up, 8 in 2026-03-09T20:20:11.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:11 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T20:20:11.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:11 vm09 ceph-mon[54524]: osdmap e43: 8 total, 7 up, 8 in 2026-03-09T20:20:11.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:11 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T20:20:12.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:12 vm05 ceph-mon[61345]: pgmap v90: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T20:20:12.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:12 vm05 ceph-mon[51870]: pgmap v90: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T20:20:12.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:12 vm09 ceph-mon[54524]: pgmap v90: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T20:20:14.868 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:14 vm09 ceph-mon[54524]: pgmap v91: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T20:20:14.869 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:14 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-09T20:20:14.869 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:14 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:14.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:14 vm05 ceph-mon[61345]: pgmap v91: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T20:20:14.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:14 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-09T20:20:14.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:14 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:14.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:14 vm05 ceph-mon[51870]: pgmap v91: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T20:20:14.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:14 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-09T20:20:14.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:14 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' 
entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:15.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:15 vm05 ceph-mon[61345]: Deploying daemon osd.7 on vm09 2026-03-09T20:20:15.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:15 vm05 ceph-mon[51870]: Deploying daemon osd.7 on vm09 2026-03-09T20:20:15.962 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:15 vm09 ceph-mon[54524]: Deploying daemon osd.7 on vm09 2026-03-09T20:20:16.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:16 vm09 ceph-mon[54524]: pgmap v92: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T20:20:16.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:16 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:20:16.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:16 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:16.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:16 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:16.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:16 vm05 ceph-mon[61345]: pgmap v92: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T20:20:16.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:16 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:20:16.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:16 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:16.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:16 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:16.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:16 vm05 ceph-mon[51870]: pgmap v92: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T20:20:16.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:16 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:20:16.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:16 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:16.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:16 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:17.557 INFO:teuthology.orchestra.run.vm09.stdout:Created osd(s) 7 on host 'vm09' 2026-03-09T20:20:17.557 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:17.556+0000 7f061ffff640 1 -- 192.168.123.109:0/917183162 <== mgr.14150 v1:192.168.123.105:6800/3290461294 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (unknown 0 0 3398224787) 0x7f060c002bf0 con 0x7f0608078260 2026-03-09T20:20:17.561 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:17.560+0000 7f06451d9640 1 -- 192.168.123.109:0/917183162 >> v1:192.168.123.105:6800/3290461294 conn(0x7f0608078260 legacy=0x7f060807a720 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:17.561 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:17.560+0000 7f06451d9640 1 -- 192.168.123.109:0/917183162 >> v1:192.168.123.105:6789/0 
conn(0x7f0640104990 legacy=0x7f064019bf70 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:17.561 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:17.560+0000 7f06451d9640 1 -- 192.168.123.109:0/917183162 shutdown_connections 2026-03-09T20:20:17.561 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:17.560+0000 7f06451d9640 1 -- 192.168.123.109:0/917183162 >> 192.168.123.109:0/917183162 conn(0x7f0640100120 msgr2=0x7f064010b940 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:17.561 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:17.560+0000 7f06451d9640 1 -- 192.168.123.109:0/917183162 shutdown_connections 2026-03-09T20:20:17.561 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:17.560+0000 7f06451d9640 1 -- 192.168.123.109:0/917183162 wait complete. 2026-03-09T20:20:17.706 DEBUG:teuthology.orchestra.run.vm09:osd.7> sudo journalctl -f -n 0 -u ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@osd.7.service 2026-03-09T20:20:17.707 INFO:tasks.cephadm:Waiting for 8 OSDs to come up... 2026-03-09T20:20:17.707 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph osd stat -f json 2026-03-09T20:20:17.869 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:20:17.955 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:17 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:17.955 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:17 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:17.955 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:17 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:17.955 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:17 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:20:17.955 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:17 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:17.955 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:17 vm09 ceph-mon[54524]: from='osd.7 v1:192.168.123.109:6812/4141797613' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T20:20:17.955 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:17 vm09 ceph-mon[54524]: from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T20:20:17.955 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:17 vm09 ceph-mon[54524]: pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T20:20:17.955 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:17 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:20:17.955 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:17 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:17.955 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 
09 20:20:17 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:17.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:17 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:17.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:17 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:17.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:17 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:17.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:17 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:20:17.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:17 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:17.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:17 vm05 ceph-mon[51870]: from='osd.7 v1:192.168.123.109:6812/4141797613' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T20:20:17.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:17 vm05 ceph-mon[51870]: from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T20:20:17.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:17 vm05 ceph-mon[51870]: pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T20:20:17.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:17 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:20:17.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:17 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:17.984 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:17 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:17.984 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:17 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:17.984 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:17 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:17.984 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:17 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:17.984 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:17 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:20:17.984 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:17 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:17.984 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:17 vm05 ceph-mon[61345]: from='osd.7 v1:192.168.123.109:6812/4141797613' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T20:20:17.984 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:17 vm05 
ceph-mon[61345]: from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T20:20:17.984 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:17 vm05 ceph-mon[61345]: pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T20:20:17.984 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:17 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:20:17.984 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:17 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:17.984 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:17 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:17.998 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:17.998+0000 7fa86b8ce640 1 -- 192.168.123.105:0/1389769796 >> v1:192.168.123.105:6789/0 conn(0x7fa864104990 legacy=0x7fa864104d90 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:17.999 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:17.999+0000 7fa86b8ce640 1 -- 192.168.123.105:0/1389769796 shutdown_connections 2026-03-09T20:20:17.999 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:17.999+0000 7fa86b8ce640 1 -- 192.168.123.105:0/1389769796 >> 192.168.123.105:0/1389769796 conn(0x7fa864100120 msgr2=0x7fa864102560 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:17.999 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:18.000+0000 7fa86b8ce640 1 -- 192.168.123.105:0/1389769796 shutdown_connections 2026-03-09T20:20:17.999 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:18.000+0000 7fa86b8ce640 1 -- 192.168.123.105:0/1389769796 wait complete. 
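
The records above are one complete pass of "ceph orch daemon add osd vm09:/dev/vdb": the mgr gathers host state ("osd tree", "auth get client.bootstrap-osd", "config generate-minimal-conf"), an "osd new" call authenticated as client.bootstrap-osd allocates the new ID (osd.7, osdmap e42), the daemon is deployed on vm09 ("Deploying daemon osd.7 on vm09", "Created osd(s) 7 on host 'vm09'"), and once it starts it registers its device class (the "osd crush set-device-class ... hdd" dispatches above) and, as osd.6 did earlier, moves itself into the CRUSH map under host=vm09 with a weight of 0.0195. That weight is just the device size expressed in TiB: each test volume is about 20 GiB, which also matches the cluster capacity growing from 120 GiB to 140 GiB when the OSD joins. A quick check of that arithmetic (illustrative, not part of the test):

    # CRUSH weights default to the device size in TiB.
    device_gib = 20                      # inferred from the 120 GiB -> 140 GiB capacity jump
    print(round(device_gib / 1024, 4))   # 0.0195, matching "weight":0.0195 above
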
2026-03-09T20:20:17.999 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:18.000+0000 7fa86b8ce640 1 Processor -- start 2026-03-09T20:20:17.999 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:18.000+0000 7fa86b8ce640 1 -- start start 2026-03-09T20:20:18.000 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:18.000+0000 7fa86b8ce640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fa86419c890 con 0x7fa864104990 2026-03-09T20:20:18.000 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:18.000+0000 7fa86b8ce640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fa8641a8040 con 0x7fa86410cad0 2026-03-09T20:20:18.000 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:18.000+0000 7fa86b8ce640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fa8641a9220 con 0x7fa864108dc0 2026-03-09T20:20:18.000 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:18.001+0000 7fa868e42640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7fa864108dc0 0x7fa8641a21e0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.105:51714/0 (socket says 192.168.123.105:51714) 2026-03-09T20:20:18.000 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:18.001+0000 7fa868e42640 1 -- 192.168.123.105:0/1466855256 learned_addr learned my addr 192.168.123.105:0/1466855256 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:20:18.000 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:18.001+0000 7fa8527fc640 1 -- 192.168.123.105:0/1466855256 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 4170279172 0 0) 0x7fa8641a9220 con 0x7fa864108dc0 2026-03-09T20:20:18.000 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:18.001+0000 7fa8527fc640 1 -- 192.168.123.105:0/1466855256 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fa840003620 con 0x7fa864108dc0 2026-03-09T20:20:18.001 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:18.001+0000 7fa8527fc640 1 -- 192.168.123.105:0/1466855256 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1581471455 0 0) 0x7fa8641a8040 con 0x7fa86410cad0 2026-03-09T20:20:18.001 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:18.001+0000 7fa8527fc640 1 -- 192.168.123.105:0/1466855256 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fa8641a9220 con 0x7fa86410cad0 2026-03-09T20:20:18.001 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:18.001+0000 7fa8527fc640 1 -- 192.168.123.105:0/1466855256 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1490691225 0 0) 0x7fa840003620 con 0x7fa864108dc0 2026-03-09T20:20:18.001 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:18.001+0000 7fa8527fc640 1 -- 192.168.123.105:0/1466855256 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fa8641a8040 con 0x7fa864108dc0 2026-03-09T20:20:18.001 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:18.001+0000 7fa8527fc640 1 -- 192.168.123.105:0/1466855256 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7fa854003150 con 0x7fa864108dc0 2026-03-09T20:20:18.001 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:18.001+0000 7fa8527fc640 1 -- 192.168.123.105:0/1466855256 <== mon.2 
v1:192.168.123.105:6790/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 1672641107 0 0) 0x7fa8641a8040 con 0x7fa864108dc0 2026-03-09T20:20:18.001 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:18.001+0000 7fa8527fc640 1 -- 192.168.123.105:0/1466855256 >> v1:192.168.123.109:6789/0 conn(0x7fa86410cad0 legacy=0x7fa8641a5910 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:18.001 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:18.001+0000 7fa8527fc640 1 -- 192.168.123.105:0/1466855256 >> v1:192.168.123.105:6789/0 conn(0x7fa864104990 legacy=0x7fa86419bd10 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-09T20:20:18.001 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:18.002+0000 7fa8527fc640 1 -- 192.168.123.105:0/1466855256 --> v1:192.168.123.105:6790/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fa8641aa400 con 0x7fa864108dc0 2026-03-09T20:20:18.002 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:18.002+0000 7fa8527fc640 1 -- 192.168.123.105:0/1466855256 <== mon.2 v1:192.168.123.105:6790/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7fa854003af0 con 0x7fa864108dc0 2026-03-09T20:20:18.002 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:18.002+0000 7fa8527fc640 1 -- 192.168.123.105:0/1466855256 <== mon.2 v1:192.168.123.105:6790/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7fa854005cc0 con 0x7fa864108dc0 2026-03-09T20:20:18.002 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:18.002+0000 7fa86b8ce640 1 -- 192.168.123.105:0/1466855256 --> v1:192.168.123.105:6790/0 -- mon_subscribe({mgrmap=0+}) -- 0x7fa8641a9450 con 0x7fa864108dc0 2026-03-09T20:20:18.002 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:18.002+0000 7fa86b8ce640 1 -- 192.168.123.105:0/1466855256 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=0}) -- 0x7fa8641a9a30 con 0x7fa864108dc0 2026-03-09T20:20:18.005 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:18.003+0000 7fa86b8ce640 1 -- 192.168.123.105:0/1466855256 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fa82c005180 con 0x7fa864108dc0 2026-03-09T20:20:18.006 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:18.004+0000 7fa8527fc640 1 -- 192.168.123.105:0/1466855256 <== mon.2 v1:192.168.123.105:6790/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 4267040124 0 0) 0x7fa8540036a0 con 0x7fa864108dc0 2026-03-09T20:20:18.006 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:18.004+0000 7fa8527fc640 1 -- 192.168.123.105:0/1466855256 <== mon.2 v1:192.168.123.105:6790/0 8 ==== osd_map(44..44 src has 1..44) ==== 3905+0+0 (unknown 3213078624 0 0) 0x7fa854094740 con 0x7fa864108dc0 2026-03-09T20:20:18.006 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:18.007+0000 7fa8527fc640 1 -- 192.168.123.105:0/1466855256 <== mon.2 v1:192.168.123.105:6790/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7fa85405dff0 con 0x7fa864108dc0 2026-03-09T20:20:18.096 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:18.097+0000 7fa86b8ce640 1 -- 192.168.123.105:0/1466855256 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "osd stat", "format": "json"} v 0) -- 0x7fa82c005470 con 0x7fa864108dc0 2026-03-09T20:20:18.097 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:18.097+0000 7fa8527fc640 1 -- 192.168.123.105:0/1466855256 <== mon.2 v1:192.168.123.105:6790/0 
10 ==== mon_command_ack([{"prefix": "osd stat", "format": "json"}]=0 v44) ==== 74+0+130 (unknown 987155921 0 901013682) 0x7fa854061ca0 con 0x7fa864108dc0 2026-03-09T20:20:18.097 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:20:18.099 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:18.099+0000 7fa86b8ce640 1 -- 192.168.123.105:0/1466855256 >> v1:192.168.123.105:6800/3290461294 conn(0x7fa8400782f0 legacy=0x7fa84007a7b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:18.099 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:18.100+0000 7fa86b8ce640 1 -- 192.168.123.105:0/1466855256 >> v1:192.168.123.105:6790/0 conn(0x7fa864108dc0 legacy=0x7fa8641a21e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:18.099 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:18.100+0000 7fa86b8ce640 1 -- 192.168.123.105:0/1466855256 shutdown_connections 2026-03-09T20:20:18.099 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:18.100+0000 7fa86b8ce640 1 -- 192.168.123.105:0/1466855256 >> 192.168.123.105:0/1466855256 conn(0x7fa864100120 msgr2=0x7fa864109200 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:18.100 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:18.100+0000 7fa86b8ce640 1 -- 192.168.123.105:0/1466855256 shutdown_connections 2026-03-09T20:20:18.100 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:18.100+0000 7fa86b8ce640 1 -- 192.168.123.105:0/1466855256 wait complete. 2026-03-09T20:20:18.247 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":44,"num_osds":8,"num_up_osds":7,"osd_up_since":1773087609,"num_in_osds":8,"osd_in_since":1773087609,"num_remapped_pgs":0} 2026-03-09T20:20:19.248 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph osd stat -f json 2026-03-09T20:20:19.269 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:18 vm05 ceph-mon[51870]: from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-09T20:20:19.269 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:18 vm05 ceph-mon[51870]: osdmap e44: 8 total, 7 up, 8 in 2026-03-09T20:20:19.269 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:18 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T20:20:19.269 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:18 vm05 ceph-mon[51870]: from='osd.7 v1:192.168.123.109:6812/4141797613' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T20:20:19.269 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:18 vm05 ceph-mon[51870]: from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T20:20:19.269 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:18 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/1466855256' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T20:20:19.269 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:18 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:19.269 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:18 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:19.269 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:18 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:20:19.269 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:18 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:20:19.269 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:18 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:20:19.269 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:18 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:20:19.269 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:18 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:19.269 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:18 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:20:19.269 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:18 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:19.269 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:18 vm05 ceph-mon[61345]: from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-09T20:20:19.269 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:18 vm05 ceph-mon[61345]: osdmap e44: 8 total, 7 up, 8 in 2026-03-09T20:20:19.269 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:18 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T20:20:19.269 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:18 vm05 ceph-mon[61345]: from='osd.7 v1:192.168.123.109:6812/4141797613' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T20:20:19.269 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:18 vm05 ceph-mon[61345]: from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T20:20:19.269 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:18 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1466855256' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T20:20:19.269 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:18 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:19.270 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:18 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:19.270 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:18 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:20:19.270 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:18 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:20:19.270 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:18 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:20:19.270 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:18 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:20:19.270 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:18 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:19.270 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:18 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:20:19.270 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:18 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:19.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:18 vm09 ceph-mon[54524]: from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-09T20:20:19.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:18 vm09 ceph-mon[54524]: osdmap e44: 8 total, 7 up, 8 in 2026-03-09T20:20:19.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:18 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T20:20:19.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:18 vm09 ceph-mon[54524]: from='osd.7 v1:192.168.123.109:6812/4141797613' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T20:20:19.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:18 vm09 ceph-mon[54524]: from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T20:20:19.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:18 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/1466855256' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T20:20:19.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:18 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:19.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:18 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:19.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:18 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:20:19.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:18 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:20:19.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:18 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:20:19.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:18 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:20:19.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:18 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:19.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:18 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:20:19.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:18 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:19.415 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:20:19.551 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:19.551+0000 7f7ff1005640 1 -- 192.168.123.105:0/1188146268 >> v1:192.168.123.105:6789/0 conn(0x7f7fec10a830 legacy=0x7f7fec10ccf0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:19.553 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:19.552+0000 7f7ff1005640 1 -- 192.168.123.105:0/1188146268 shutdown_connections 2026-03-09T20:20:19.553 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:19.552+0000 7f7ff1005640 1 -- 192.168.123.105:0/1188146268 >> 192.168.123.105:0/1188146268 conn(0x7f7fec0fcd60 msgr2=0x7f7fec0ff1a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:19.553 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:19.554+0000 7f7ff1005640 1 -- 192.168.123.105:0/1188146268 shutdown_connections 2026-03-09T20:20:19.553 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:19.554+0000 7f7ff1005640 1 -- 192.168.123.105:0/1188146268 wait complete. 
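Editor's note: the `ceph osd stat -f json` answer above ({"epoch":44,...,"num_up_osds":7,...}) is the value the harness keys on while it waits for the last OSD (osd.7) to boot. A minimal sketch of checking that line, not the actual teuthology helper; the function name is illustrative:

```python
import json

# stdout line copied verbatim from the log above (epoch 44, osd.7 still down)
stat_line = ('{"epoch":44,"num_osds":8,"num_up_osds":7,'
             '"osd_up_since":1773087609,"num_in_osds":8,'
             '"osd_in_since":1773087609,"num_remapped_pgs":0}')

def all_osds_up(stat_json: str) -> bool:
    """Return True once every OSD in the map is both up and in."""
    stat = json.loads(stat_json)
    return (stat["num_up_osds"] == stat["num_osds"]
            and stat["num_in_osds"] == stat["num_osds"])

print(all_osds_up(stat_line))  # False here: only 7 of 8 OSDs are up
```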
2026-03-09T20:20:19.554 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:19.554+0000 7f7ff1005640 1 Processor -- start 2026-03-09T20:20:19.554 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:19.555+0000 7f7ff1005640 1 -- start start 2026-03-09T20:20:19.554 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:19.555+0000 7f7ff1005640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f7fec1a6d30 con 0x7f7fec10a830 2026-03-09T20:20:19.554 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:19.555+0000 7f7ff1005640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f7fec1a7f30 con 0x7f7fec106cf0 2026-03-09T20:20:19.554 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:19.555+0000 7f7ff1005640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f7fec1a9130 con 0x7f7fec1031e0 2026-03-09T20:20:19.555 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:19.555+0000 7f7feb577640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f7fec10a830 0x7f7fec1a5430 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:46940/0 (socket says 192.168.123.105:46940) 2026-03-09T20:20:19.555 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:19.555+0000 7f7feb577640 1 -- 192.168.123.105:0/2390867343 learned_addr learned my addr 192.168.123.105:0/2390867343 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:20:19.555 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:19.556+0000 7f7fcffff640 1 -- 192.168.123.105:0/2390867343 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 925684916 0 0) 0x7f7fec1a7f30 con 0x7f7fec106cf0 2026-03-09T20:20:19.555 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:19.556+0000 7f7fcffff640 1 -- 192.168.123.105:0/2390867343 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f7fc4003620 con 0x7f7fec106cf0 2026-03-09T20:20:19.555 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:19.556+0000 7f7fcffff640 1 -- 192.168.123.105:0/2390867343 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 328784273 0 0) 0x7f7fec1a9130 con 0x7f7fec1031e0 2026-03-09T20:20:19.555 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:19.556+0000 7f7fcffff640 1 -- 192.168.123.105:0/2390867343 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f7fec1a7f30 con 0x7f7fec1031e0 2026-03-09T20:20:19.556 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:19.556+0000 7f7fcffff640 1 -- 192.168.123.105:0/2390867343 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 544968579 0 0) 0x7f7fec1a7f30 con 0x7f7fec1031e0 2026-03-09T20:20:19.556 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:19.556+0000 7f7fcffff640 1 -- 192.168.123.105:0/2390867343 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f7fec1a9130 con 0x7f7fec1031e0 2026-03-09T20:20:19.556 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:19.556+0000 7f7fcffff640 1 -- 192.168.123.105:0/2390867343 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f7fe0003070 con 0x7f7fec1031e0 2026-03-09T20:20:19.556 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:19.556+0000 7f7fcffff640 1 -- 192.168.123.105:0/2390867343 <== mon.0 
v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2305499556 0 0) 0x7f7fec1a6d30 con 0x7f7fec10a830 2026-03-09T20:20:19.556 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:19.556+0000 7f7fcffff640 1 -- 192.168.123.105:0/2390867343 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f7fec1a7f30 con 0x7f7fec10a830 2026-03-09T20:20:19.556 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:19.557+0000 7f7fcffff640 1 -- 192.168.123.105:0/2390867343 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 250504832 0 0) 0x7f7fc4003620 con 0x7f7fec106cf0 2026-03-09T20:20:19.556 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:19.557+0000 7f7fcffff640 1 -- 192.168.123.105:0/2390867343 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f7fec1a6d30 con 0x7f7fec106cf0 2026-03-09T20:20:19.556 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:19.557+0000 7f7fcffff640 1 -- 192.168.123.105:0/2390867343 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f7fd8003030 con 0x7f7fec106cf0 2026-03-09T20:20:19.556 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:19.557+0000 7f7fcffff640 1 -- 192.168.123.105:0/2390867343 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2218852606 0 0) 0x7f7fec1a7f30 con 0x7f7fec10a830 2026-03-09T20:20:19.556 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:19.557+0000 7f7fcffff640 1 -- 192.168.123.105:0/2390867343 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f7fc4003620 con 0x7f7fec10a830 2026-03-09T20:20:19.556 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:19.557+0000 7f7fcffff640 1 -- 192.168.123.105:0/2390867343 <== mon.2 v1:192.168.123.105:6790/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 843675884 0 0) 0x7f7fec1a9130 con 0x7f7fec1031e0 2026-03-09T20:20:19.556 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:19.557+0000 7f7fcffff640 1 -- 192.168.123.105:0/2390867343 >> v1:192.168.123.109:6789/0 conn(0x7f7fec106cf0 legacy=0x7f7fec1a1b60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:19.556 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:19.557+0000 7f7fcffff640 1 -- 192.168.123.105:0/2390867343 >> v1:192.168.123.105:6789/0 conn(0x7f7fec10a830 legacy=0x7f7fec1a5430 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:19.556 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:19.557+0000 7f7fcffff640 1 -- 192.168.123.105:0/2390867343 --> v1:192.168.123.105:6790/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f7fec1aa330 con 0x7f7fec1031e0 2026-03-09T20:20:19.557 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:19.557+0000 7f7fcffff640 1 -- 192.168.123.105:0/2390867343 <== mon.2 v1:192.168.123.105:6790/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f7fe0003660 con 0x7f7fec1031e0 2026-03-09T20:20:19.557 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:19.557+0000 7f7fcffff640 1 -- 192.168.123.105:0/2390867343 <== mon.2 v1:192.168.123.105:6790/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f7fe00049f0 con 0x7f7fec1031e0 2026-03-09T20:20:19.557 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:19.558+0000 7f7ff1005640 1 -- 192.168.123.105:0/2390867343 --> v1:192.168.123.105:6790/0 -- mon_subscribe({mgrmap=0+}) 
-- 0x7f7fec1a9360 con 0x7f7fec1031e0 2026-03-09T20:20:19.558 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:19.558+0000 7f7ff1005640 1 -- 192.168.123.105:0/2390867343 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=0}) -- 0x7f7fec1a9990 con 0x7f7fec1031e0 2026-03-09T20:20:19.559 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:19.559+0000 7f7ff1005640 1 -- 192.168.123.105:0/2390867343 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f7fb8005180 con 0x7f7fec1031e0 2026-03-09T20:20:19.559 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:19.559+0000 7f7fcffff640 1 -- 192.168.123.105:0/2390867343 <== mon.2 v1:192.168.123.105:6790/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 4267040124 0 0) 0x7f7fe0003210 con 0x7f7fec1031e0 2026-03-09T20:20:19.561 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:19.562+0000 7f7fcffff640 1 -- 192.168.123.105:0/2390867343 <== mon.2 v1:192.168.123.105:6790/0 8 ==== osd_map(45..45 src has 1..45) ==== 3921+0+0 (unknown 575505581 0 0) 0x7f7fe0093330 con 0x7f7fec1031e0 2026-03-09T20:20:19.562 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:19.563+0000 7f7fcffff640 1 -- 192.168.123.105:0/2390867343 <== mon.2 v1:192.168.123.105:6790/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f7fe005cbd0 con 0x7f7fec1031e0 2026-03-09T20:20:19.659 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:19.659+0000 7f7ff1005640 1 -- 192.168.123.105:0/2390867343 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "osd stat", "format": "json"} v 0) -- 0x7f7fb8005470 con 0x7f7fec1031e0 2026-03-09T20:20:19.659 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:19.660+0000 7f7fcffff640 1 -- 192.168.123.105:0/2390867343 <== mon.2 v1:192.168.123.105:6790/0 10 ==== mon_command_ack([{"prefix": "osd stat", "format": "json"}]=0 v45) ==== 74+0+130 (unknown 3291589750 0 948895776) 0x7f7fe0060880 con 0x7f7fec1031e0 2026-03-09T20:20:19.659 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:20:19.661 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:19.662+0000 7f7ff1005640 1 -- 192.168.123.105:0/2390867343 >> v1:192.168.123.105:6800/3290461294 conn(0x7f7fc4078480 legacy=0x7f7fc407a940 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:19.662 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:19.662+0000 7f7ff1005640 1 -- 192.168.123.105:0/2390867343 >> v1:192.168.123.105:6790/0 conn(0x7f7fec1031e0 legacy=0x7f7fec1029e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:19.662 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:19.663+0000 7f7ff1005640 1 -- 192.168.123.105:0/2390867343 shutdown_connections 2026-03-09T20:20:19.662 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:19.663+0000 7f7ff1005640 1 -- 192.168.123.105:0/2390867343 >> 192.168.123.105:0/2390867343 conn(0x7f7fec0fcd60 msgr2=0x7f7fec10cc20 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:19.662 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:19.663+0000 7f7ff1005640 1 -- 192.168.123.105:0/2390867343 shutdown_connections 2026-03-09T20:20:19.662 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:19.663+0000 7f7ff1005640 1 -- 192.168.123.105:0/2390867343 wait complete. 
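Editor's note: each probe above is a fresh `cephadm shell` invocation, and because the run sets `debug ms: 1` globally, the messenger chatter lands on stderr while the JSON answer is the only thing on stdout, so the two streams have to be read separately. A hedged sketch of driving the same command with `subprocess`; the image tag and fsid are copied from the DEBUG line above, everything else is illustrative (in the real run teuthology executes this over SSH on vm05, not locally):

```python
import json
import subprocess

cmd = [
    "sudo", "cephadm",
    "--image", "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df",
    "shell",
    "-c", "/etc/ceph/ceph.conf",
    "-k", "/etc/ceph/ceph.client.admin.keyring",
    "--fsid", "c0151936-1bf4-11f1-b896-23f7bea8a6ea",
    "--", "ceph", "osd", "stat", "-f", "json",
]

proc = subprocess.run(cmd, capture_output=True, text=True, check=True)
# proc.stderr carries the `debug ms: 1` messenger lines seen above;
# proc.stdout is the pure JSON document.
osd_stat = json.loads(proc.stdout)
print(osd_stat["num_up_osds"], "/", osd_stat["num_osds"], "OSDs up")
```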
2026-03-09T20:20:19.853 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":45,"num_osds":8,"num_up_osds":7,"osd_up_since":1773087609,"num_in_osds":8,"osd_in_since":1773087609,"num_remapped_pgs":0} 2026-03-09T20:20:20.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:19 vm05 ceph-mon[61345]: Detected new or changed devices on vm09 2026-03-09T20:20:20.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:19 vm05 ceph-mon[61345]: Adjusting osd_memory_target on vm09 to 65804k 2026-03-09T20:20:20.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:19 vm05 ceph-mon[61345]: Unable to set osd_memory_target on vm09 to 67384115: error parsing value: Value '67384115' is below minimum 939524096 2026-03-09T20:20:20.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:19 vm05 ceph-mon[61345]: from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T20:20:20.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:19 vm05 ceph-mon[61345]: osdmap e45: 8 total, 7 up, 8 in 2026-03-09T20:20:20.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:19 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T20:20:20.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:19 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T20:20:20.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:19 vm05 ceph-mon[61345]: pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T20:20:20.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:19 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/2390867343' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T20:20:20.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:19 vm05 ceph-mon[61345]: from='osd.7 ' entity='osd.7' 2026-03-09T20:20:20.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:19 vm05 ceph-mon[61345]: osd.7 v1:192.168.123.109:6812/4141797613 boot 2026-03-09T20:20:20.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:19 vm05 ceph-mon[61345]: osdmap e46: 8 total, 8 up, 8 in 2026-03-09T20:20:20.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:19 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T20:20:20.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:19 vm05 ceph-mon[51870]: Detected new or changed devices on vm09 2026-03-09T20:20:20.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:19 vm05 ceph-mon[51870]: Adjusting osd_memory_target on vm09 to 65804k 2026-03-09T20:20:20.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:19 vm05 ceph-mon[51870]: Unable to set osd_memory_target on vm09 to 67384115: error parsing value: Value '67384115' is below minimum 939524096 2026-03-09T20:20:20.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:19 vm05 ceph-mon[51870]: from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T20:20:20.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:19 vm05 ceph-mon[51870]: osdmap e45: 8 total, 7 up, 8 in 2026-03-09T20:20:20.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:19 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T20:20:20.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:19 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T20:20:20.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:19 vm05 ceph-mon[51870]: pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T20:20:20.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:19 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/2390867343' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T20:20:20.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:19 vm05 ceph-mon[51870]: from='osd.7 ' entity='osd.7' 2026-03-09T20:20:20.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:19 vm05 ceph-mon[51870]: osd.7 v1:192.168.123.109:6812/4141797613 boot 2026-03-09T20:20:20.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:19 vm05 ceph-mon[51870]: osdmap e46: 8 total, 8 up, 8 in 2026-03-09T20:20:20.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:19 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T20:20:20.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:19 vm09 ceph-mon[54524]: Detected new or changed devices on vm09 2026-03-09T20:20:20.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:19 vm09 ceph-mon[54524]: Adjusting osd_memory_target on vm09 to 65804k 2026-03-09T20:20:20.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:19 vm09 ceph-mon[54524]: Unable to set osd_memory_target on vm09 to 67384115: error parsing value: Value '67384115' is below minimum 939524096 2026-03-09T20:20:20.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:19 vm09 ceph-mon[54524]: from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T20:20:20.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:19 vm09 ceph-mon[54524]: osdmap e45: 8 total, 7 up, 8 in 2026-03-09T20:20:20.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:19 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T20:20:20.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:19 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T20:20:20.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:19 vm09 ceph-mon[54524]: pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T20:20:20.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:19 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/2390867343' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T20:20:20.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:19 vm09 ceph-mon[54524]: from='osd.7 ' entity='osd.7' 2026-03-09T20:20:20.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:19 vm09 ceph-mon[54524]: osd.7 v1:192.168.123.109:6812/4141797613 boot 2026-03-09T20:20:20.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:19 vm09 ceph-mon[54524]: osdmap e46: 8 total, 8 up, 8 in 2026-03-09T20:20:20.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:19 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T20:20:20.273 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:20:19 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:20:19.830+0000 7f070b51a640 -1 osd.7 0 waiting for initial osdmap 2026-03-09T20:20:20.273 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:20:19 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:20:19.842+0000 7f0706b43640 -1 osd.7 45 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T20:20:20.854 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph osd stat -f json 2026-03-09T20:20:21.036 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:20:21.151 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:20 vm05 ceph-mon[61345]: purged_snaps scrub starts 2026-03-09T20:20:21.152 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:20 vm05 ceph-mon[61345]: purged_snaps scrub ok 2026-03-09T20:20:21.152 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:20 vm05 ceph-mon[61345]: osdmap e47: 8 total, 8 up, 8 in 2026-03-09T20:20:21.152 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:20 vm05 ceph-mon[51870]: purged_snaps scrub starts 2026-03-09T20:20:21.152 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:20 vm05 ceph-mon[51870]: purged_snaps scrub ok 2026-03-09T20:20:21.152 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:20 vm05 ceph-mon[51870]: osdmap e47: 8 total, 8 up, 8 in 2026-03-09T20:20:21.177 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.178+0000 7f5a2dec1640 1 -- 192.168.123.105:0/2142039643 >> v1:192.168.123.105:6790/0 conn(0x7f5a28069a50 legacy=0x7f5a28101ab0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:21.178 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.178+0000 7f5a2dec1640 1 -- 192.168.123.105:0/2142039643 shutdown_connections 2026-03-09T20:20:21.178 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.178+0000 7f5a2dec1640 1 -- 192.168.123.105:0/2142039643 >> 192.168.123.105:0/2142039643 conn(0x7f5a280fc020 msgr2=0x7f5a280fe480 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:21.178 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.178+0000 7f5a2dec1640 1 -- 192.168.123.105:0/2142039643 shutdown_connections 2026-03-09T20:20:21.178 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.179+0000 7f5a2dec1640 1 -- 192.168.123.105:0/2142039643 wait complete. 
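Editor's note: the repeated mon warning "Unable to set osd_memory_target on vm09 to 67384115: error parsing value: Value '67384115' is below minimum 939524096" is benign on these small VPS nodes: the autotuner proposes roughly 64 MiB per OSD, far under the 896 MiB floor, so the config set is refused. A tiny sketch of the same bounds check; the floor value comes straight from the error message, the helper name is made up:

```python
# Floor quoted in the mon log: 939524096 bytes == 896 MiB
OSD_MEMORY_TARGET_MIN = 939524096

def accept_osd_memory_target(value_bytes: int) -> bool:
    """Mimic the rejection seen above: values below the minimum are refused."""
    return value_bytes >= OSD_MEMORY_TARGET_MIN

proposed = 67384115  # ~64 MiB, the value cephadm tried to set on vm09
print(accept_osd_memory_target(proposed))  # False -> "Unable to set ..." warning
```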
2026-03-09T20:20:21.178 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.179+0000 7f5a2dec1640 1 Processor -- start 2026-03-09T20:20:21.178 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.179+0000 7f5a2dec1640 1 -- start start 2026-03-09T20:20:21.179 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.179+0000 7f5a2dec1640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f5a281a6b20 con 0x7f5a28069a50 2026-03-09T20:20:21.179 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.179+0000 7f5a2dec1640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f5a281a7d20 con 0x7f5a28102670 2026-03-09T20:20:21.179 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.179+0000 7f5a2dec1640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f5a281a8f20 con 0x7f5a281021c0 2026-03-09T20:20:21.179 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.180+0000 7f5a27fff640 1 --1- >> v1:192.168.123.109:6789/0 conn(0x7f5a28102670 0x7f5a281a5220 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.109:6789/0 says I am v1:192.168.123.105:45748/0 (socket says 192.168.123.105:45748) 2026-03-09T20:20:21.179 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.180+0000 7f5a27fff640 1 -- 192.168.123.105:0/674979238 learned_addr learned my addr 192.168.123.105:0/674979238 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:20:21.179 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.179+0000 7f5a26ffd640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7f5a281021c0 0x7f5a281a1950 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.105:48902/0 (socket says 192.168.123.105:48902) 2026-03-09T20:20:21.179 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.180+0000 7f5a24ff9640 1 -- 192.168.123.105:0/674979238 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 329306614 0 0) 0x7f5a281a6b20 con 0x7f5a28069a50 2026-03-09T20:20:21.179 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.180+0000 7f5a24ff9640 1 -- 192.168.123.105:0/674979238 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f5a00003620 con 0x7f5a28069a50 2026-03-09T20:20:21.180 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.180+0000 7f5a24ff9640 1 -- 192.168.123.105:0/674979238 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2868903919 0 0) 0x7f5a00003620 con 0x7f5a28069a50 2026-03-09T20:20:21.180 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.180+0000 7f5a24ff9640 1 -- 192.168.123.105:0/674979238 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f5a281a6b20 con 0x7f5a28069a50 2026-03-09T20:20:21.180 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.180+0000 7f5a24ff9640 1 -- 192.168.123.105:0/674979238 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f5a0c003200 con 0x7f5a28069a50 2026-03-09T20:20:21.180 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.180+0000 7f5a24ff9640 1 -- 192.168.123.105:0/674979238 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2105357275 0 0) 0x7f5a281a6b20 con 0x7f5a28069a50 2026-03-09T20:20:21.180 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.180+0000 7f5a24ff9640 1 -- 192.168.123.105:0/674979238 >> v1:192.168.123.105:6790/0 conn(0x7f5a281021c0 legacy=0x7f5a281a1950 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:21.180 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.180+0000 7f5a24ff9640 1 -- 192.168.123.105:0/674979238 >> v1:192.168.123.109:6789/0 conn(0x7f5a28102670 legacy=0x7f5a281a5220 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:21.180 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.180+0000 7f5a24ff9640 1 -- 192.168.123.105:0/674979238 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f5a281aa120 con 0x7f5a28069a50 2026-03-09T20:20:21.181 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.181+0000 7f5a2dec1640 1 -- 192.168.123.105:0/674979238 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f5a281a9150 con 0x7f5a28069a50 2026-03-09T20:20:21.181 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.181+0000 7f5a2dec1640 1 -- 192.168.123.105:0/674979238 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f5a281a9780 con 0x7f5a28069a50 2026-03-09T20:20:21.182 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.182+0000 7f5a24ff9640 1 -- 192.168.123.105:0/674979238 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f5a0c0027f0 con 0x7f5a28069a50 2026-03-09T20:20:21.182 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.182+0000 7f5a24ff9640 1 -- 192.168.123.105:0/674979238 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f5a0c004d60 con 0x7f5a28069a50 2026-03-09T20:20:21.182 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.182+0000 7f5a24ff9640 1 -- 192.168.123.105:0/674979238 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 4267040124 0 0) 0x7f5a0c004fc0 con 0x7f5a28069a50 2026-03-09T20:20:21.185 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.183+0000 7f5a2dec1640 1 -- 192.168.123.105:0/674979238 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f59ec005180 con 0x7f5a28069a50 2026-03-09T20:20:21.185 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.184+0000 7f5a24ff9640 1 -- 192.168.123.105:0/674979238 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(47..47 src has 1..47) ==== 4061+0+0 (unknown 1165997932 0 0) 0x7f5a0c0945a0 con 0x7f5a28069a50 2026-03-09T20:20:21.185 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.186+0000 7f5a24ff9640 1 -- 192.168.123.105:0/674979238 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f5a0c05df50 con 0x7f5a28069a50 2026-03-09T20:20:21.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:20 vm09 ceph-mon[54524]: purged_snaps scrub starts 2026-03-09T20:20:21.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:20 vm09 ceph-mon[54524]: purged_snaps scrub ok 2026-03-09T20:20:21.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:20 vm09 ceph-mon[54524]: osdmap e47: 8 total, 8 up, 8 in 2026-03-09T20:20:21.280 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.281+0000 7f5a2dec1640 1 -- 192.168.123.105:0/674979238 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "osd stat", "format": "json"} v 
0) -- 0x7f59ec005470 con 0x7f5a28069a50 2026-03-09T20:20:21.281 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.281+0000 7f5a24ff9640 1 -- 192.168.123.105:0/674979238 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "osd stat", "format": "json"}]=0 v47) ==== 74+0+130 (unknown 1007884745 0 1521389820) 0x7f5a0c061c00 con 0x7f5a28069a50 2026-03-09T20:20:21.281 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:20:21.283 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.283+0000 7f5a2dec1640 1 -- 192.168.123.105:0/674979238 >> v1:192.168.123.105:6800/3290461294 conn(0x7f5a00078220 legacy=0x7f5a0007a6e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:21.283 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.284+0000 7f5a2dec1640 1 -- 192.168.123.105:0/674979238 >> v1:192.168.123.105:6789/0 conn(0x7f5a28069a50 legacy=0x7f5a28104760 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:21.283 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.284+0000 7f5a2dec1640 1 -- 192.168.123.105:0/674979238 shutdown_connections 2026-03-09T20:20:21.283 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.284+0000 7f5a2dec1640 1 -- 192.168.123.105:0/674979238 >> 192.168.123.105:0/674979238 conn(0x7f5a280fc020 msgr2=0x7f5a280fc860 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:21.284 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.284+0000 7f5a2dec1640 1 -- 192.168.123.105:0/674979238 shutdown_connections 2026-03-09T20:20:21.284 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.284+0000 7f5a2dec1640 1 -- 192.168.123.105:0/674979238 wait complete. 2026-03-09T20:20:21.456 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":47,"num_osds":8,"num_up_osds":8,"osd_up_since":1773087619,"num_in_osds":8,"osd_in_since":1773087609,"num_remapped_pgs":0} 2026-03-09T20:20:21.456 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph osd dump --format=json 2026-03-09T20:20:21.626 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:20:21.747 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.747+0000 7fa567f1f640 1 -- 192.168.123.105:0/741218899 >> v1:192.168.123.105:6790/0 conn(0x7fa56010a910 legacy=0x7fa56010acf0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:21.747 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.748+0000 7fa567f1f640 1 -- 192.168.123.105:0/741218899 shutdown_connections 2026-03-09T20:20:21.747 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.748+0000 7fa567f1f640 1 -- 192.168.123.105:0/741218899 >> 192.168.123.105:0/741218899 conn(0x7fa5601005f0 msgr2=0x7fa560102a10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:21.747 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.748+0000 7fa567f1f640 1 -- 192.168.123.105:0/741218899 shutdown_connections 2026-03-09T20:20:21.747 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.748+0000 7fa567f1f640 1 -- 192.168.123.105:0/741218899 wait complete. 
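Editor's note: at epoch 47 the same `osd stat` probe finally reports "num_up_osds":8, so the wait ends and the harness moves on to `ceph osd dump --format=json`. A minimal polling sketch under the same assumptions as the earlier snippet; `run_osd_stat()` is a stand-in for the cephadm invocation shown above and here just returns the epoch-47 answer copied from the log:

```python
import time

def run_osd_stat() -> dict:
    """Stand-in for `cephadm shell -- ceph osd stat -f json` (see log above)."""
    return {"epoch": 47, "num_osds": 8, "num_up_osds": 8,
            "osd_up_since": 1773087619, "num_in_osds": 8,
            "osd_in_since": 1773087609, "num_remapped_pgs": 0}

def wait_for_all_osds_up(timeout: float = 300.0, interval: float = 1.0) -> dict:
    """Poll osd stat until num_up_osds == num_osds, as the run above does
    between epoch 44 (7 up) and epoch 47 (8 up), roughly once a second."""
    deadline = time.monotonic() + timeout
    while True:
        stat = run_osd_stat()
        if stat["num_up_osds"] == stat["num_osds"]:
            return stat
        if time.monotonic() > deadline:
            raise TimeoutError(
                f"only {stat['num_up_osds']}/{stat['num_osds']} OSDs up")
        time.sleep(interval)

print(wait_for_all_osds_up()["epoch"])  # 47
```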
2026-03-09T20:20:21.748 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.748+0000 7fa567f1f640 1 Processor -- start 2026-03-09T20:20:21.748 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.749+0000 7fa567f1f640 1 -- start start 2026-03-09T20:20:21.748 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.749+0000 7fa567f1f640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fa5601ab8c0 con 0x7fa560111360 2026-03-09T20:20:21.748 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.749+0000 7fa567f1f640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fa5601acac0 con 0x7fa56010a910 2026-03-09T20:20:21.748 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.749+0000 7fa567f1f640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fa5601adcc0 con 0x7fa56010d7c0 2026-03-09T20:20:21.748 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.749+0000 7fa566495640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7fa560111360 0x7fa5601a9fc0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:59976/0 (socket says 192.168.123.105:59976) 2026-03-09T20:20:21.748 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.749+0000 7fa566495640 1 -- 192.168.123.105:0/1040954370 learned_addr learned my addr 192.168.123.105:0/1040954370 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:20:21.749 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.750+0000 7fa54effd640 1 -- 192.168.123.105:0/1040954370 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1305096268 0 0) 0x7fa5601ab8c0 con 0x7fa560111360 2026-03-09T20:20:21.749 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.750+0000 7fa54effd640 1 -- 192.168.123.105:0/1040954370 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fa53c003620 con 0x7fa560111360 2026-03-09T20:20:21.749 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.750+0000 7fa54effd640 1 -- 192.168.123.105:0/1040954370 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2943030864 0 0) 0x7fa5601acac0 con 0x7fa56010a910 2026-03-09T20:20:21.749 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.750+0000 7fa54effd640 1 -- 192.168.123.105:0/1040954370 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fa5601ab8c0 con 0x7fa56010a910 2026-03-09T20:20:21.749 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.750+0000 7fa54effd640 1 -- 192.168.123.105:0/1040954370 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 530923814 0 0) 0x7fa5601adcc0 con 0x7fa56010d7c0 2026-03-09T20:20:21.749 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.750+0000 7fa54effd640 1 -- 192.168.123.105:0/1040954370 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fa5601acac0 con 0x7fa56010d7c0 2026-03-09T20:20:21.749 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.750+0000 7fa54effd640 1 -- 192.168.123.105:0/1040954370 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 4226106265 0 0) 0x7fa53c003620 con 0x7fa560111360 2026-03-09T20:20:21.750 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.750+0000 7fa54effd640 1 -- 192.168.123.105:0/1040954370 --> 
v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fa5601adcc0 con 0x7fa560111360 2026-03-09T20:20:21.750 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.750+0000 7fa54effd640 1 -- 192.168.123.105:0/1040954370 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7fa55c003500 con 0x7fa560111360 2026-03-09T20:20:21.750 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.750+0000 7fa54effd640 1 -- 192.168.123.105:0/1040954370 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3373006540 0 0) 0x7fa5601ab8c0 con 0x7fa56010a910 2026-03-09T20:20:21.750 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.750+0000 7fa54effd640 1 -- 192.168.123.105:0/1040954370 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fa53c003620 con 0x7fa56010a910 2026-03-09T20:20:21.750 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.750+0000 7fa54effd640 1 -- 192.168.123.105:0/1040954370 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7fa554003180 con 0x7fa56010a910 2026-03-09T20:20:21.750 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.751+0000 7fa54effd640 1 -- 192.168.123.105:0/1040954370 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1074757454 0 0) 0x7fa5601acac0 con 0x7fa56010d7c0 2026-03-09T20:20:21.750 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.751+0000 7fa54effd640 1 -- 192.168.123.105:0/1040954370 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fa5601ab8c0 con 0x7fa56010d7c0 2026-03-09T20:20:21.750 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.751+0000 7fa54effd640 1 -- 192.168.123.105:0/1040954370 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 306136188 0 0) 0x7fa5601adcc0 con 0x7fa560111360 2026-03-09T20:20:21.750 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.751+0000 7fa54effd640 1 -- 192.168.123.105:0/1040954370 >> v1:192.168.123.105:6790/0 conn(0x7fa56010d7c0 legacy=0x7fa5601a6650 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:21.750 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.751+0000 7fa54effd640 1 -- 192.168.123.105:0/1040954370 >> v1:192.168.123.109:6789/0 conn(0x7fa56010a910 legacy=0x7fa560110be0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:21.751 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.751+0000 7fa54effd640 1 -- 192.168.123.105:0/1040954370 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fa5601aeec0 con 0x7fa560111360 2026-03-09T20:20:21.751 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.751+0000 7fa567f1f640 1 -- 192.168.123.105:0/1040954370 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7fa5601accf0 con 0x7fa560111360 2026-03-09T20:20:21.751 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.751+0000 7fa567f1f640 1 -- 192.168.123.105:0/1040954370 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7fa5601ad250 con 0x7fa560111360 2026-03-09T20:20:21.752 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.752+0000 7fa54effd640 1 -- 192.168.123.105:0/1040954370 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7fa55c003ee0 con 0x7fa560111360 
2026-03-09T20:20:21.752 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.752+0000 7fa54effd640 1 -- 192.168.123.105:0/1040954370 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7fa55c005f90 con 0x7fa560111360 2026-03-09T20:20:21.752 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.752+0000 7fa54effd640 1 -- 192.168.123.105:0/1040954370 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 4267040124 0 0) 0x7fa55c01e8e0 con 0x7fa560111360 2026-03-09T20:20:21.752 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.753+0000 7fa567f1f640 1 -- 192.168.123.105:0/1040954370 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fa528005180 con 0x7fa560111360 2026-03-09T20:20:21.752 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.753+0000 7fa54effd640 1 -- 192.168.123.105:0/1040954370 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(47..47 src has 1..47) ==== 4061+0+0 (unknown 1165997932 0 0) 0x7fa55c0949b0 con 0x7fa560111360 2026-03-09T20:20:21.756 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.756+0000 7fa54effd640 1 -- 192.168.123.105:0/1040954370 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7fa55c05ef50 con 0x7fa560111360 2026-03-09T20:20:21.852 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.852+0000 7fa567f1f640 1 -- 192.168.123.105:0/1040954370 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "osd dump", "format": "json"} v 0) -- 0x7fa528005470 con 0x7fa560111360 2026-03-09T20:20:21.852 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.853+0000 7fa54effd640 1 -- 192.168.123.105:0/1040954370 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "osd dump", "format": "json"}]=0 v47) ==== 74+0+11721 (unknown 1331009764 0 612048026) 0x7fa55c062c00 con 0x7fa560111360 2026-03-09T20:20:21.852 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:20:21.853 
INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":47,"fsid":"c0151936-1bf4-11f1-b896-23f7bea8a6ea","created":"2026-03-09T20:17:54.449051+0000","modified":"2026-03-09T20:20:20.943247+0000","last_up_change":"2026-03-09T20:20:19.940896+0000","last_in_change":"2026-03-09T20:20:09.342882+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-09T20:19:21.537102+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"21","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"35c6a684-ee69-44bf-83ae-27ddd2fd2486","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":46,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6801","nonce":1625499026}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6802","nonce":1625499026}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6804","nonce":1625499026}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6803","nonce":1625499026}]},"public_addr":"192.168.123.105:6801/1625499026","cluster_addr":"192.168.123.105:6802/1625499026","heartbeat_back_addr":"192.168.123.105:6804/1625499026","heartbeat_front_addr":"192.168.123.105:6803/1625499026","state":["exists","up"]},{"osd":1,"uuid":"4a3ff444-017e-44cd-9222-93f1d8dcc4db","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":12,"up_thru":31,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6805","nonce":3664200689}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6806","nonce":3664200689}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6808","nonce":3664200689}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6807","nonce":3664200689}]},"public_addr":"192.168.123.105:6805/3664200689","cluster_addr":"192.168.123.105:6806/3664200689","heartbeat_back_addr":"192.168.123.105:6808/3664200689","heartbeat_front_addr":"192.168.123.105:6807/3664200689","state":["exists","up"]},{"osd":2,"uuid":"58868a45-388a-4244-bde9-e525f4e2b7d5","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":17,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6809","nonce":1060255430}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6810","nonce":1060255430}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6812","nonce":1060255430}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6811","nonce":1060255430}]},"public_addr":"192.168.123.105:6809/1060255430","cluster_addr":"192.168.123.105:6810/1060255430","heartbeat_back_addr":"192.168.123.105:6812/1060255430","heartbeat_front_addr":"192.168.123.105:6811/1060255430","state":["exists","up"]},{"osd":3,"uuid":"4c40929b-9b22-486e-aed2-a111cbaa96da","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":25,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6813","nonce":4176641888}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6814","nonce":4176641888}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6816","nonce":4176641888}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6815","nonce":4176641888}]},"public_addr":"192.168.123.105:6813/4176641888","cluster_addr":"192.168.123.105:6814/4176641888","heartbeat_back_addr":"192.168.123.105:6816/4176641888","heartbeat_front_addr":"192.168.123.105:6815/4176641888","state":["exists","up"]},{"osd":4,"uuid":"acddd4eb-0110-4992-a3c7-2
01ba9fd8f8e","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":30,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6800","nonce":4063967321}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6801","nonce":4063967321}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6803","nonce":4063967321}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6802","nonce":4063967321}]},"public_addr":"192.168.123.109:6800/4063967321","cluster_addr":"192.168.123.109:6801/4063967321","heartbeat_back_addr":"192.168.123.109:6803/4063967321","heartbeat_front_addr":"192.168.123.109:6802/4063967321","state":["exists","up"]},{"osd":5,"uuid":"61fedd79-419a-4176-9825-9d059c9d73f0","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":36,"up_thru":37,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6804","nonce":3558334635}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6805","nonce":3558334635}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6807","nonce":3558334635}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6806","nonce":3558334635}]},"public_addr":"192.168.123.109:6804/3558334635","cluster_addr":"192.168.123.109:6805/3558334635","heartbeat_back_addr":"192.168.123.109:6807/3558334635","heartbeat_front_addr":"192.168.123.109:6806/3558334635","state":["exists","up"]},{"osd":6,"uuid":"d4965700-0e14-493b-8c85-282e7ba1da51","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":41,"up_thru":42,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6808","nonce":3079043049}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6809","nonce":3079043049}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6811","nonce":3079043049}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6810","nonce":3079043049}]},"public_addr":"192.168.123.109:6808/3079043049","cluster_addr":"192.168.123.109:6809/3079043049","heartbeat_back_addr":"192.168.123.109:6811/3079043049","heartbeat_front_addr":"192.168.123.109:6810/3079043049","state":["exists","up"]},{"osd":7,"uuid":"ae4f5298-3a65-4f5e-b653-7ee92ac3f2a9","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":46,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6812","nonce":4141797613}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6813","nonce":4141797613}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6815","nonce":4141797613}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6814","nonce":4141797613}]},"public_addr":"192.168.123.109:6812/4141797613","cluster_addr":"192.168.123.109:6813/4141797613","heartbeat_back_addr":"192.168.123.109:6815/4141797613","heartbeat_front_addr":"192.168.123.109:6814/4141797613","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T20:18:55.880962+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_sc
rub":"2026-03-09T20:19:07.124566+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T20:19:18.432517+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T20:19:29.741981+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T20:19:42.702664+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T20:19:55.089147+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T20:20:07.303163+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T20:20:18.384854+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.105:6800/1901557444":"2026-03-10T20:18:17.477725+0000","192.168.123.105:0/4136016323":"2026-03-10T20:18:17.477725+0000","192.168.123.105:0/3703967877":"2026-03-10T20:18:17.477725+0000","192.168.123.105:0/4146364495":"2026-03-10T20:18:06.314330+0000","192.168.123.105:0/3832503883":"2026-03-10T20:18:06.314330+0000","192.168.123.105:0/3398073401":"2026-03-10T20:18:06.314330+0000","192.168.123.105:0/2964833350":"2026-03-10T20:18:17.477725+0000","192.168.123.105:6800/4277841438":"2026-03-10T20:18:06.314330+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-09T20:20:21.854 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.855+0000 7fa567f1f640 1 -- 192.168.123.105:0/1040954370 >> v1:192.168.123.105:6800/3290461294 conn(0x7fa53c0783e0 legacy=0x7fa53c07a8a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:21.855 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.855+0000 7fa567f1f640 1 -- 192.168.123.105:0/1040954370 >> v1:192.168.123.105:6789/0 conn(0x7fa560111360 legacy=0x7fa5601a9fc0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:21.855 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.856+0000 7fa567f1f640 1 -- 192.168.123.105:0/1040954370 shutdown_connections 2026-03-09T20:20:21.855 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.856+0000 7fa567f1f640 1 -- 192.168.123.105:0/1040954370 >> 192.168.123.105:0/1040954370 conn(0x7fa5601005f0 msgr2=0x7fa560103680 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:21.855 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.856+0000 7fa567f1f640 1 -- 192.168.123.105:0/1040954370 shutdown_connections 2026-03-09T20:20:21.855 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:21.856+0000 7fa567f1f640 1 -- 192.168.123.105:0/1040954370 
wait complete. 2026-03-09T20:20:22.021 INFO:tasks.cephadm.ceph_manager.ceph:[{'pool': 1, 'pool_name': '.mgr', 'create_time': '2026-03-09T20:19:21.537102+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'is_stretch_pool': False, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '21', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': {'score_type': 'Fair distribution', 'score_acting': 7.889999866485596, 'score_stable': 7.889999866485596, 'optimal_score': 0.3799999952316284, 'raw_score_acting': 3, 'raw_score_stable': 3, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}] 2026-03-09T20:20:22.021 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph osd pool get .mgr pg_num 2026-03-09T20:20:22.199 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:20:22.221 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:21 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/674979238' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T20:20:22.221 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:21 vm05 ceph-mon[61345]: pgmap v99: 1 pgs: 1 peering; 0 B data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T20:20:22.221 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:21 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1040954370' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T20:20:22.221 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:21 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/674979238' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T20:20:22.221 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:21 vm05 ceph-mon[51870]: pgmap v99: 1 pgs: 1 peering; 0 B data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T20:20:22.221 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:21 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1040954370' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T20:20:22.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:21 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/674979238' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T20:20:22.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:21 vm09 ceph-mon[54524]: pgmap v99: 1 pgs: 1 peering; 0 B data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T20:20:22.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:21 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1040954370' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T20:20:22.324 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:22.325+0000 7f8208018640 1 -- 192.168.123.105:0/552885282 >> v1:192.168.123.105:6789/0 conn(0x7f820010a910 legacy=0x7f820010acf0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:22.325 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:22.325+0000 7f8208018640 1 -- 192.168.123.105:0/552885282 shutdown_connections 2026-03-09T20:20:22.325 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:22.325+0000 7f8208018640 1 -- 192.168.123.105:0/552885282 >> 192.168.123.105:0/552885282 conn(0x7f82001005f0 msgr2=0x7f8200102a10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:22.325 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:22.325+0000 7f8208018640 1 -- 192.168.123.105:0/552885282 shutdown_connections 2026-03-09T20:20:22.325 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:22.325+0000 7f8208018640 1 -- 192.168.123.105:0/552885282 wait complete. 
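Note: the ceph_manager check above is verifying that the .mgr pool came up with a single placement group. A minimal standalone equivalent of that query, assuming cephadm is installed on the admin host and reusing the image and fsid already shown in this log, would be:

  sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df \
      shell --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- \
      ceph osd pool get .mgr pg_num
  # expected output, per the run further below: pg_num: 1

The journalctl lines from mon.a, mon.b and mon.c that surround it simply show each monitor dispatching the same "osd pool get" command.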
2026-03-09T20:20:22.325 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:22.326+0000 7f8208018640 1 Processor -- start 2026-03-09T20:20:22.325 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:22.326+0000 7f8208018640 1 -- start start 2026-03-09T20:20:22.326 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:22.326+0000 7f8208018640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f8200110cf0 con 0x7f8200111360 2026-03-09T20:20:22.326 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:22.326+0000 7f8208018640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f8200110ec0 con 0x7f820010d7c0 2026-03-09T20:20:22.326 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:22.326+0000 7f8208018640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f8200111090 con 0x7f820010a910 2026-03-09T20:20:22.326 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:22.326+0000 7f8205d8d640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7f820010a910 0x7f820010de50 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.105:48938/0 (socket says 192.168.123.105:48938) 2026-03-09T20:20:22.326 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:22.326+0000 7f8205d8d640 1 -- 192.168.123.105:0/3947604958 learned_addr learned my addr 192.168.123.105:0/3947604958 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:20:22.327 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:22.327+0000 7f81f6ffd640 1 -- 192.168.123.105:0/3947604958 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3600430546 0 0) 0x7f8200111090 con 0x7f820010a910 2026-03-09T20:20:22.327 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:22.327+0000 7f81f6ffd640 1 -- 192.168.123.105:0/3947604958 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f81d8003620 con 0x7f820010a910 2026-03-09T20:20:22.327 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:22.327+0000 7f81f6ffd640 1 -- 192.168.123.105:0/3947604958 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 260455582 0 0) 0x7f8200110cf0 con 0x7f8200111360 2026-03-09T20:20:22.327 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:22.327+0000 7f81f6ffd640 1 -- 192.168.123.105:0/3947604958 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f8200111090 con 0x7f8200111360 2026-03-09T20:20:22.327 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:22.327+0000 7f81f6ffd640 1 -- 192.168.123.105:0/3947604958 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 414626620 0 0) 0x7f8200110ec0 con 0x7f820010d7c0 2026-03-09T20:20:22.327 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:22.327+0000 7f81f6ffd640 1 -- 192.168.123.105:0/3947604958 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f8200110cf0 con 0x7f820010d7c0 2026-03-09T20:20:22.327 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:22.327+0000 7f81f6ffd640 1 -- 192.168.123.105:0/3947604958 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 491421821 0 0) 0x7f81d8003620 con 0x7f820010a910 2026-03-09T20:20:22.327 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:22.327+0000 7f81f6ffd640 1 -- 192.168.123.105:0/3947604958 --> 
v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f8200110ec0 con 0x7f820010a910 2026-03-09T20:20:22.327 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:22.327+0000 7f81f6ffd640 1 -- 192.168.123.105:0/3947604958 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f81e8003230 con 0x7f820010a910 2026-03-09T20:20:22.327 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:22.328+0000 7f81f6ffd640 1 -- 192.168.123.105:0/3947604958 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3640851196 0 0) 0x7f8200111090 con 0x7f8200111360 2026-03-09T20:20:22.327 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:22.328+0000 7f81f6ffd640 1 -- 192.168.123.105:0/3947604958 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f81d8003620 con 0x7f8200111360 2026-03-09T20:20:22.327 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:22.328+0000 7f81f6ffd640 1 -- 192.168.123.105:0/3947604958 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f81fc0033c0 con 0x7f8200111360 2026-03-09T20:20:22.327 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:22.328+0000 7f81f6ffd640 1 -- 192.168.123.105:0/3947604958 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1341836302 0 0) 0x7f8200110cf0 con 0x7f820010d7c0 2026-03-09T20:20:22.327 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:22.328+0000 7f81f6ffd640 1 -- 192.168.123.105:0/3947604958 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f8200111090 con 0x7f820010d7c0 2026-03-09T20:20:22.327 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:22.328+0000 7f81f6ffd640 1 -- 192.168.123.105:0/3947604958 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f81f0003140 con 0x7f820010d7c0 2026-03-09T20:20:22.328 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:22.328+0000 7f81f6ffd640 1 -- 192.168.123.105:0/3947604958 <== mon.2 v1:192.168.123.105:6790/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 3376767920 0 0) 0x7f8200110ec0 con 0x7f820010a910 2026-03-09T20:20:22.328 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:22.328+0000 7f81f6ffd640 1 -- 192.168.123.105:0/3947604958 >> v1:192.168.123.109:6789/0 conn(0x7f820010d7c0 legacy=0x7f820010e560 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:22.328 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:22.328+0000 7f81f6ffd640 1 -- 192.168.123.105:0/3947604958 >> v1:192.168.123.105:6789/0 conn(0x7f8200111360 legacy=0x7f82001af050 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:22.328 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:22.328+0000 7f81f6ffd640 1 -- 192.168.123.105:0/3947604958 --> v1:192.168.123.105:6790/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f82001b37a0 con 0x7f820010a910 2026-03-09T20:20:22.328 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:22.328+0000 7f8208018640 1 -- 192.168.123.105:0/3947604958 --> v1:192.168.123.105:6790/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f82001b0770 con 0x7f820010a910 2026-03-09T20:20:22.328 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:22.328+0000 7f8208018640 1 -- 192.168.123.105:0/3947604958 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=0}) -- 0x7f82001b0d20 con 0x7f820010a910 
2026-03-09T20:20:22.329 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:22.329+0000 7f81f6ffd640 1 -- 192.168.123.105:0/3947604958 <== mon.2 v1:192.168.123.105:6790/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f81e8003cc0 con 0x7f820010a910 2026-03-09T20:20:22.329 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:22.329+0000 7f81f6ffd640 1 -- 192.168.123.105:0/3947604958 <== mon.2 v1:192.168.123.105:6790/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f81e80052c0 con 0x7f820010a910 2026-03-09T20:20:22.329 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:22.330+0000 7f8208018640 1 -- 192.168.123.105:0/3947604958 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f81c8005180 con 0x7f820010a910 2026-03-09T20:20:22.332 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:22.330+0000 7f81f6ffd640 1 -- 192.168.123.105:0/3947604958 <== mon.2 v1:192.168.123.105:6790/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 4267040124 0 0) 0x7f81e80055c0 con 0x7f820010a910 2026-03-09T20:20:22.332 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:22.330+0000 7f81f6ffd640 1 -- 192.168.123.105:0/3947604958 <== mon.2 v1:192.168.123.105:6790/0 8 ==== osd_map(48..48 src has 1..48) ==== 4061+0+0 (unknown 4047596570 0 0) 0x7f81e8094740 con 0x7f820010a910 2026-03-09T20:20:22.332 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:22.333+0000 7f81f6ffd640 1 -- 192.168.123.105:0/3947604958 <== mon.2 v1:192.168.123.105:6790/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f81e805df60 con 0x7f820010a910 2026-03-09T20:20:22.428 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:22.429+0000 7f8208018640 1 -- 192.168.123.105:0/3947604958 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"} v 0) -- 0x7f81c8005d40 con 0x7f820010a910 2026-03-09T20:20:22.429 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:22.430+0000 7f81f6ffd640 1 -- 192.168.123.105:0/3947604958 <== mon.2 v1:192.168.123.105:6790/0 10 ==== mon_command_ack([{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]=0 v48) ==== 93+0+10 (unknown 2713261361 0 2170607528) 0x7f81e8061c10 con 0x7f820010a910 2026-03-09T20:20:22.429 INFO:teuthology.orchestra.run.vm05.stdout:pg_num: 1 2026-03-09T20:20:22.431 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:22.432+0000 7f8208018640 1 -- 192.168.123.105:0/3947604958 >> v1:192.168.123.105:6800/3290461294 conn(0x7f81d8078420 legacy=0x7f81d807a8e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:22.431 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:22.432+0000 7f8208018640 1 -- 192.168.123.105:0/3947604958 >> v1:192.168.123.105:6790/0 conn(0x7f820010a910 legacy=0x7f820010de50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:22.431 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:22.432+0000 7f8208018640 1 -- 192.168.123.105:0/3947604958 shutdown_connections 2026-03-09T20:20:22.431 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:22.432+0000 7f8208018640 1 -- 192.168.123.105:0/3947604958 >> 192.168.123.105:0/3947604958 conn(0x7f82001005f0 msgr2=0x7f82001147a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:22.432 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:22.432+0000 7f8208018640 1 -- 192.168.123.105:0/3947604958 
shutdown_connections 2026-03-09T20:20:22.432 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:22.432+0000 7f8208018640 1 -- 192.168.123.105:0/3947604958 wait complete. 2026-03-09T20:20:22.600 INFO:tasks.cephadm:Adding ceph.rgw.foo.a on vm05 2026-03-09T20:20:22.600 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph orch apply rgw foo.a --placement '1;vm05=foo.a' 2026-03-09T20:20:22.784 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.b/config 2026-03-09T20:20:22.915 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:22.914+0000 7f695960d640 1 -- 192.168.123.109:0/3350090098 >> v1:192.168.123.109:6789/0 conn(0x7f6954104990 legacy=0x7f6954104d90 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:22.915 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:22.915+0000 7f695960d640 1 -- 192.168.123.109:0/3350090098 shutdown_connections 2026-03-09T20:20:22.915 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:22.915+0000 7f695960d640 1 -- 192.168.123.109:0/3350090098 >> 192.168.123.109:0/3350090098 conn(0x7f6954100120 msgr2=0x7f6954102560 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:22.915 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:22.915+0000 7f695960d640 1 -- 192.168.123.109:0/3350090098 shutdown_connections 2026-03-09T20:20:22.916 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:22.915+0000 7f695960d640 1 -- 192.168.123.109:0/3350090098 wait complete. 2026-03-09T20:20:22.916 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:22.916+0000 7f695960d640 1 Processor -- start 2026-03-09T20:20:22.916 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:22.916+0000 7f695960d640 1 -- start start 2026-03-09T20:20:22.916 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:22.916+0000 7f695960d640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f6954078010 con 0x7f6954108dc0 2026-03-09T20:20:22.917 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:22.916+0000 7f695960d640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f69540781e0 con 0x7f695410cad0 2026-03-09T20:20:22.917 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:22.916+0000 7f695960d640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f69540783b0 con 0x7f6954104990 2026-03-09T20:20:22.917 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:22.916+0000 7f69527fc640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f6954108dc0 0x7f6954077900 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.109:45802/0 (socket says 192.168.123.109:45802) 2026-03-09T20:20:22.917 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:22.916+0000 7f69527fc640 1 -- 192.168.123.109:0/2846762786 learned_addr learned my addr 192.168.123.109:0/2846762786 (peer_addr_for_me v1:192.168.123.109:0/0) 2026-03-09T20:20:22.917 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:22.917+0000 7f6933fff640 1 -- 192.168.123.109:0/2846762786 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1849401778 0 0) 0x7f69540781e0 con 0x7f695410cad0 2026-03-09T20:20:22.917 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:22.917+0000 7f6933fff640 1 -- 192.168.123.109:0/2846762786 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f692c003620 con 0x7f695410cad0 2026-03-09T20:20:22.917 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:22.917+0000 7f6933fff640 1 -- 192.168.123.109:0/2846762786 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3857213755 0 0) 0x7f6954078010 con 0x7f6954108dc0 2026-03-09T20:20:22.917 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:22.917+0000 7f6933fff640 1 -- 192.168.123.109:0/2846762786 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f69540781e0 con 0x7f6954108dc0 2026-03-09T20:20:22.917 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:22.917+0000 7f6933fff640 1 -- 192.168.123.109:0/2846762786 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3863191860 0 0) 0x7f69540783b0 con 0x7f6954104990 2026-03-09T20:20:22.917 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:22.917+0000 7f6933fff640 1 -- 192.168.123.109:0/2846762786 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f6954078010 con 0x7f6954104990 2026-03-09T20:20:22.917 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:22.917+0000 7f6933fff640 1 -- 192.168.123.109:0/2846762786 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1336384959 0 0) 0x7f692c003620 con 0x7f695410cad0 2026-03-09T20:20:22.917 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:22.917+0000 7f6933fff640 1 -- 192.168.123.109:0/2846762786 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f69540783b0 con 0x7f695410cad0 2026-03-09T20:20:22.917 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:22.917+0000 7f6933fff640 1 -- 192.168.123.109:0/2846762786 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f69440030a0 con 0x7f695410cad0 2026-03-09T20:20:22.917 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:22.917+0000 7f6933fff640 1 -- 192.168.123.109:0/2846762786 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 30715153 0 0) 0x7f69540781e0 con 0x7f6954108dc0 2026-03-09T20:20:22.917 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:22.917+0000 7f6933fff640 1 -- 192.168.123.109:0/2846762786 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f692c003620 con 0x7f6954108dc0 2026-03-09T20:20:22.917 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:22.917+0000 7f6933fff640 1 -- 192.168.123.109:0/2846762786 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f69480040d0 con 0x7f6954108dc0 2026-03-09T20:20:22.918 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:22.917+0000 7f6933fff640 1 -- 192.168.123.109:0/2846762786 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2297732094 0 0) 0x7f6954078010 con 0x7f6954104990 2026-03-09T20:20:22.918 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:22.917+0000 7f6933fff640 1 -- 192.168.123.109:0/2846762786 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f69540781e0 con 0x7f6954104990 2026-03-09T20:20:22.918 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:22.917+0000 7f6933fff640 1 -- 
192.168.123.109:0/2846762786 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f693c0030c0 con 0x7f6954104990 2026-03-09T20:20:22.918 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:22.917+0000 7f6933fff640 1 -- 192.168.123.109:0/2846762786 <== mon.1 v1:192.168.123.109:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2255721800 0 0) 0x7f69540783b0 con 0x7f695410cad0 2026-03-09T20:20:22.918 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:22.917+0000 7f6933fff640 1 -- 192.168.123.109:0/2846762786 >> v1:192.168.123.105:6790/0 conn(0x7f6954104990 legacy=0x7f695407adc0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:22.918 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:22.917+0000 7f6933fff640 1 -- 192.168.123.109:0/2846762786 >> v1:192.168.123.105:6789/0 conn(0x7f6954108dc0 legacy=0x7f6954077900 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:22.918 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:22.917+0000 7f6933fff640 1 -- 192.168.123.109:0/2846762786 --> v1:192.168.123.109:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f69541ae980 con 0x7f695410cad0 2026-03-09T20:20:22.919 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:22.918+0000 7f6933fff640 1 -- 192.168.123.109:0/2846762786 <== mon.1 v1:192.168.123.109:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f6944003d10 con 0x7f695410cad0 2026-03-09T20:20:22.919 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:22.918+0000 7f6933fff640 1 -- 192.168.123.109:0/2846762786 <== mon.1 v1:192.168.123.109:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f6944004d70 con 0x7f695410cad0 2026-03-09T20:20:22.919 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:22.918+0000 7f695960d640 1 -- 192.168.123.109:0/2846762786 --> v1:192.168.123.109:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f69541ab950 con 0x7f695410cad0 2026-03-09T20:20:22.919 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:22.918+0000 7f695960d640 1 -- 192.168.123.109:0/2846762786 --> v1:192.168.123.109:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f69541abf80 con 0x7f695410cad0 2026-03-09T20:20:22.919 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:22.919+0000 7f695960d640 1 -- 192.168.123.109:0/2846762786 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f6920005180 con 0x7f695410cad0 2026-03-09T20:20:22.923 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:22.922+0000 7f6933fff640 1 -- 192.168.123.109:0/2846762786 <== mon.1 v1:192.168.123.109:6789/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 4267040124 0 0) 0x7f69440035f0 con 0x7f695410cad0 2026-03-09T20:20:22.923 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:22.923+0000 7f6933fff640 1 -- 192.168.123.109:0/2846762786 <== mon.1 v1:192.168.123.109:6789/0 8 ==== osd_map(48..48 src has 1..48) ==== 4061+0+0 (unknown 4047596570 0 0) 0x7f6944093700 con 0x7f695410cad0 2026-03-09T20:20:22.923 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:22.923+0000 7f6933fff640 1 -- 192.168.123.109:0/2846762786 <== mon.1 v1:192.168.123.109:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f6944093bb0 con 0x7f695410cad0 2026-03-09T20:20:23.021 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.020+0000 7f695960d640 1 -- 
192.168.123.109:0/2846762786 --> v1:192.168.123.105:6800/3290461294 -- mgr_command(tid 0: {"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm05=foo.a", "target": ["mon-mgr", ""]}) -- 0x7f6920002bf0 con 0x7f692c0784f0 2026-03-09T20:20:23.028 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.028+0000 7f6933fff640 1 -- 192.168.123.109:0/2846762786 <== mgr.14150 v1:192.168.123.105:6800/3290461294 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+30 (unknown 0 0 1123153589) 0x7f6920002bf0 con 0x7f692c0784f0 2026-03-09T20:20:23.028 INFO:teuthology.orchestra.run.vm09.stdout:Scheduled rgw.foo.a update... 2026-03-09T20:20:23.031 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.030+0000 7f695960d640 1 -- 192.168.123.109:0/2846762786 >> v1:192.168.123.105:6800/3290461294 conn(0x7f692c0784f0 legacy=0x7f692c07a9b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:23.031 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.030+0000 7f695960d640 1 -- 192.168.123.109:0/2846762786 >> v1:192.168.123.109:6789/0 conn(0x7f695410cad0 legacy=0x7f69541aa230 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:23.031 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.031+0000 7f695960d640 1 -- 192.168.123.109:0/2846762786 shutdown_connections 2026-03-09T20:20:23.031 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.031+0000 7f695960d640 1 -- 192.168.123.109:0/2846762786 >> 192.168.123.109:0/2846762786 conn(0x7f6954100120 msgr2=0x7f695410b940 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:23.031 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.031+0000 7f695960d640 1 -- 192.168.123.109:0/2846762786 shutdown_connections 2026-03-09T20:20:23.031 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.031+0000 7f695960d640 1 -- 192.168.123.109:0/2846762786 wait complete. 2026-03-09T20:20:23.177 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:22 vm09 ceph-mon[54524]: osdmap e48: 8 total, 8 up, 8 in 2026-03-09T20:20:23.177 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:22 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3947604958' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-09T20:20:23.208 DEBUG:teuthology.orchestra.run.vm05:rgw.foo.a> sudo journalctl -f -n 0 -u ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@rgw.foo.a.service 2026-03-09T20:20:23.209 INFO:tasks.cephadm:Adding ceph.iscsi.iscsi.a on vm09 2026-03-09T20:20:23.210 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph osd pool create datapool 3 3 replicated 2026-03-09T20:20:23.241 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:22 vm05 ceph-mon[51870]: osdmap e48: 8 total, 8 up, 8 in 2026-03-09T20:20:23.241 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:22 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3947604958' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-09T20:20:23.241 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:22 vm05 ceph-mon[61345]: osdmap e48: 8 total, 8 up, 8 in 2026-03-09T20:20:23.241 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:22 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3947604958' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-09T20:20:23.396 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.b/config 2026-03-09T20:20:23.542 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.540+0000 7f622c403640 1 -- 192.168.123.109:0/1944675604 >> v1:192.168.123.109:6789/0 conn(0x7f62241049b0 legacy=0x7f6224104db0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:23.542 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.541+0000 7f622c403640 1 -- 192.168.123.109:0/1944675604 shutdown_connections 2026-03-09T20:20:23.542 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.541+0000 7f622c403640 1 -- 192.168.123.109:0/1944675604 >> 192.168.123.109:0/1944675604 conn(0x7f6224100120 msgr2=0x7f6224102580 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:23.542 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.542+0000 7f622c403640 1 -- 192.168.123.109:0/1944675604 shutdown_connections 2026-03-09T20:20:23.542 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.542+0000 7f622c403640 1 -- 192.168.123.109:0/1944675604 wait complete. 2026-03-09T20:20:23.543 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.542+0000 7f622c403640 1 Processor -- start 2026-03-09T20:20:23.543 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.542+0000 7f622c403640 1 -- start start 2026-03-09T20:20:23.543 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.543+0000 7f622c403640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f622419cab0 con 0x7f6224108de0 2026-03-09T20:20:23.543 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.543+0000 7f622c403640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f62241a8270 con 0x7f62241049b0 2026-03-09T20:20:23.543 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.543+0000 7f622c403640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f62241a9450 con 0x7f622410caf0 2026-03-09T20:20:23.543 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.543+0000 7f6229977640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f6224108de0 0x7f62241a2410 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.109:45832/0 (socket says 192.168.123.109:45832) 2026-03-09T20:20:23.543 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.543+0000 7f6229977640 1 -- 192.168.123.109:0/2448962654 learned_addr learned my addr 192.168.123.109:0/2448962654 (peer_addr_for_me v1:192.168.123.109:0/0) 2026-03-09T20:20:23.544 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.543+0000 7f621b7fe640 1 -- 192.168.123.109:0/2448962654 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1314414718 0 0) 0x7f622419cab0 con 0x7f6224108de0 2026-03-09T20:20:23.544 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.543+0000 7f621b7fe640 1 -- 192.168.123.109:0/2448962654 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f61fc003620 con 0x7f6224108de0 2026-03-09T20:20:23.544 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.543+0000 7f621b7fe640 1 -- 192.168.123.109:0/2448962654 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) 
==== 33+0+0 (unknown 2073052068 0 0) 0x7f62241a8270 con 0x7f62241049b0 2026-03-09T20:20:23.544 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.543+0000 7f621b7fe640 1 -- 192.168.123.109:0/2448962654 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f622419cab0 con 0x7f62241049b0 2026-03-09T20:20:23.544 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.543+0000 7f621b7fe640 1 -- 192.168.123.109:0/2448962654 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1748738958 0 0) 0x7f62241a9450 con 0x7f622410caf0 2026-03-09T20:20:23.544 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.543+0000 7f621b7fe640 1 -- 192.168.123.109:0/2448962654 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f62241a8270 con 0x7f622410caf0 2026-03-09T20:20:23.544 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.544+0000 7f621b7fe640 1 -- 192.168.123.109:0/2448962654 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1938751015 0 0) 0x7f61fc003620 con 0x7f6224108de0 2026-03-09T20:20:23.544 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.544+0000 7f621b7fe640 1 -- 192.168.123.109:0/2448962654 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f62241a9450 con 0x7f6224108de0 2026-03-09T20:20:23.544 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.544+0000 7f621b7fe640 1 -- 192.168.123.109:0/2448962654 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 453859994 0 0) 0x7f622419cab0 con 0x7f62241049b0 2026-03-09T20:20:23.544 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.544+0000 7f621b7fe640 1 -- 192.168.123.109:0/2448962654 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f61fc003620 con 0x7f62241049b0 2026-03-09T20:20:23.544 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.544+0000 7f621b7fe640 1 -- 192.168.123.109:0/2448962654 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f6214004100 con 0x7f62241049b0 2026-03-09T20:20:23.544 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.544+0000 7f621b7fe640 1 -- 192.168.123.109:0/2448962654 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 856446977 0 0) 0x7f62241a8270 con 0x7f622410caf0 2026-03-09T20:20:23.544 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.544+0000 7f621b7fe640 1 -- 192.168.123.109:0/2448962654 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f622419cab0 con 0x7f622410caf0 2026-03-09T20:20:23.544 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.544+0000 7f621b7fe640 1 -- 192.168.123.109:0/2448962654 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f6220003380 con 0x7f622410caf0 2026-03-09T20:20:23.545 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.544+0000 7f621b7fe640 1 -- 192.168.123.109:0/2448962654 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f620c003140 con 0x7f6224108de0 2026-03-09T20:20:23.545 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.544+0000 7f621b7fe640 1 -- 192.168.123.109:0/2448962654 <== mon.2 v1:192.168.123.105:6790/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 405080536 0 0) 0x7f622419cab0 con 
0x7f622410caf0 2026-03-09T20:20:23.545 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.544+0000 7f621b7fe640 1 -- 192.168.123.109:0/2448962654 >> v1:192.168.123.109:6789/0 conn(0x7f62241049b0 legacy=0x7f622419bf30 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:23.545 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.544+0000 7f621b7fe640 1 -- 192.168.123.109:0/2448962654 >> v1:192.168.123.105:6789/0 conn(0x7f6224108de0 legacy=0x7f62241a2410 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:23.545 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.544+0000 7f621b7fe640 1 -- 192.168.123.109:0/2448962654 --> v1:192.168.123.105:6790/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f62241aa630 con 0x7f622410caf0 2026-03-09T20:20:23.545 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.545+0000 7f622c403640 1 -- 192.168.123.109:0/2448962654 --> v1:192.168.123.105:6790/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f62241a9680 con 0x7f622410caf0 2026-03-09T20:20:23.545 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.545+0000 7f621b7fe640 1 -- 192.168.123.109:0/2448962654 <== mon.2 v1:192.168.123.105:6790/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f6220004880 con 0x7f622410caf0 2026-03-09T20:20:23.545 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.545+0000 7f622c403640 1 -- 192.168.123.109:0/2448962654 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=0}) -- 0x7f62241a9c60 con 0x7f622410caf0 2026-03-09T20:20:23.546 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.545+0000 7f621b7fe640 1 -- 192.168.123.109:0/2448962654 <== mon.2 v1:192.168.123.105:6790/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f6220004d20 con 0x7f622410caf0 2026-03-09T20:20:23.546 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.545+0000 7f622c403640 1 -- 192.168.123.109:0/2448962654 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f61ec005180 con 0x7f622410caf0 2026-03-09T20:20:23.548 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.547+0000 7f621b7fe640 1 -- 192.168.123.109:0/2448962654 <== mon.2 v1:192.168.123.105:6790/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 4267040124 0 0) 0x7f6220003520 con 0x7f622410caf0 2026-03-09T20:20:23.548 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.547+0000 7f621b7fe640 1 -- 192.168.123.109:0/2448962654 <== mon.2 v1:192.168.123.105:6790/0 8 ==== osd_map(48..48 src has 1..48) ==== 4061+0+0 (unknown 4047596570 0 0) 0x7f6220093670 con 0x7f622410caf0 2026-03-09T20:20:23.549 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.549+0000 7f621b7fe640 1 -- 192.168.123.109:0/2448962654 <== mon.2 v1:192.168.123.105:6790/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f622005ce90 con 0x7f622410caf0 2026-03-09T20:20:23.646 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:23.646+0000 7f622c403640 1 -- 192.168.123.109:0/2448962654 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"} v 0) -- 0x7f61ec005470 con 0x7f622410caf0 2026-03-09T20:20:23.910 INFO:journalctl@ceph.rgw.foo.a.vm05.stdout:Mar 09 20:20:23 vm05 systemd[1]: Starting Ceph rgw.foo.a for c0151936-1bf4-11f1-b896-23f7bea8a6ea... 
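Note: the rgw.foo.a service whose systemd unit is starting here was scheduled through the orchestrator a moment earlier (see the "orch apply rgw" mgr_command above). Reduced to the underlying command, what the cephadm task ran inside a cephadm shell was:

  ceph orch apply rgw foo.a --placement '1;vm05=foo.a'
  # The mgr then creates the client.rgw.foo.a key and deploys the daemon on vm05,
  # which is what the monitor journal lines in this part of the log record.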
2026-03-09T20:20:24.079 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:24.077+0000 7f621b7fe640 1 -- 192.168.123.109:0/2448962654 <== mon.2 v1:192.168.123.105:6790/0 10 ==== mon_command_ack([{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]=0 pool 'datapool' created v49) ==== 160+0+0 (unknown 2280981637 0 0) 0x7f6220060b40 con 0x7f622410caf0 2026-03-09T20:20:24.079 INFO:teuthology.orchestra.run.vm09.stderr:pool 'datapool' created 2026-03-09T20:20:24.081 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:24.079+0000 7f622c403640 1 -- 192.168.123.109:0/2448962654 >> v1:192.168.123.105:6800/3290461294 conn(0x7f61fc078420 legacy=0x7f61fc07a8e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:24.081 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:24.079+0000 7f622c403640 1 -- 192.168.123.109:0/2448962654 >> v1:192.168.123.105:6790/0 conn(0x7f622410caf0 legacy=0x7f62241a5b40 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:24.083 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:24.083+0000 7f622c403640 1 -- 192.168.123.109:0/2448962654 shutdown_connections 2026-03-09T20:20:24.083 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:24.083+0000 7f622c403640 1 -- 192.168.123.109:0/2448962654 >> 192.168.123.109:0/2448962654 conn(0x7f6224100120 msgr2=0x7f6224109220 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:24.083 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:24.083+0000 7f622c403640 1 -- 192.168.123.109:0/2448962654 shutdown_connections 2026-03-09T20:20:24.083 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:24.083+0000 7f622c403640 1 -- 192.168.123.109:0/2448962654 wait complete. 
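Note: before ceph.iscsi.iscsi.a is added on vm09, the task prepares its backing pool. The sequence visible here, stripped of the cephadm shell wrapper (all of these invocations appear verbatim in the log), is:

  ceph osd pool create datapool 3 3 replicated   # acked above as "pool 'datapool' created"
  rbd pool init datapool                          # issued in the next shell invocation below
  # rbd pool init tags the pool with the 'rbd' application, which later shows up in the
  # monitor journals as an "osd pool application enable ... app: rbd" dispatch.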
2026-03-09T20:20:24.232 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- rbd pool init datapool 2026-03-09T20:20:24.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:24 vm05 ceph-mon[51870]: from='client.24349 v1:192.168.123.109:0/2846762786' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm05=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:20:24.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:24 vm05 ceph-mon[51870]: Saving service rgw.foo.a spec with placement vm05=foo.a;count:1 2026-03-09T20:20:24.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:24 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:24.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:24 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:20:24.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:24 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:24.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:24 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:20:24.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:24 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:24.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:24 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T20:20:24.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:24 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-09T20:20:24.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:24 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:24.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:24 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:24.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:24 vm05 ceph-mon[51870]: Deploying daemon rgw.foo.a on vm05 2026-03-09T20:20:24.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:24 vm05 ceph-mon[51870]: pgmap v101: 1 pgs: 1 peering; 0 B data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T20:20:24.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:24 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.109:0/2448962654' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T20:20:24.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:24 vm05 ceph-mon[51870]: from='client.24338 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T20:20:24.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:24 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:24.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:24 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:24.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:24 vm05 ceph-mon[61345]: from='client.24349 v1:192.168.123.109:0/2846762786' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm05=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:20:24.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:24 vm05 ceph-mon[61345]: Saving service rgw.foo.a spec with placement vm05=foo.a;count:1 2026-03-09T20:20:24.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:24 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:24.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:24 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:20:24.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:24 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:24.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:24 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:20:24.411 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:24 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:24.411 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:24 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T20:20:24.411 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:24 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-09T20:20:24.411 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:24 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:24.411 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:24 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:24.411 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:24 vm05 ceph-mon[61345]: Deploying daemon rgw.foo.a on vm05 2026-03-09T20:20:24.411 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:24 vm05 ceph-mon[61345]: pgmap v101: 1 pgs: 1 peering; 0 B data, 
214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T20:20:24.411 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:24 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.109:0/2448962654' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T20:20:24.411 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:24 vm05 ceph-mon[61345]: from='client.24338 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T20:20:24.411 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:24 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:24.411 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:24 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:24.411 INFO:journalctl@ceph.rgw.foo.a.vm05.stdout:Mar 09 20:20:23 vm05 podman[86156]: 2026-03-09 20:20:23.911588797 +0000 UTC m=+0.017723215 container create 87d27b7ebb480d0ad1c0b10d4054705139bb2ec732bdb61c2ab6e97f528124a6 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-rgw-foo-a, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-09T20:20:24.411 INFO:journalctl@ceph.rgw.foo.a.vm05.stdout:Mar 09 20:20:23 vm05 podman[86156]: 2026-03-09 20:20:23.967545057 +0000 UTC m=+0.073679485 container init 87d27b7ebb480d0ad1c0b10d4054705139bb2ec732bdb61c2ab6e97f528124a6 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-rgw-foo-a, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS) 2026-03-09T20:20:24.411 INFO:journalctl@ceph.rgw.foo.a.vm05.stdout:Mar 09 20:20:23 vm05 podman[86156]: 2026-03-09 20:20:23.973567861 +0000 UTC m=+0.079702279 container start 87d27b7ebb480d0ad1c0b10d4054705139bb2ec732bdb61c2ab6e97f528124a6 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-rgw-foo-a, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , 
org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, io.buildah.version=1.41.3) 2026-03-09T20:20:24.411 INFO:journalctl@ceph.rgw.foo.a.vm05.stdout:Mar 09 20:20:23 vm05 bash[86156]: 87d27b7ebb480d0ad1c0b10d4054705139bb2ec732bdb61c2ab6e97f528124a6 2026-03-09T20:20:24.411 INFO:journalctl@ceph.rgw.foo.a.vm05.stdout:Mar 09 20:20:23 vm05 podman[86156]: 2026-03-09 20:20:23.903865801 +0000 UTC m=+0.010000219 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T20:20:24.411 INFO:journalctl@ceph.rgw.foo.a.vm05.stdout:Mar 09 20:20:23 vm05 systemd[1]: Started Ceph rgw.foo.a for c0151936-1bf4-11f1-b896-23f7bea8a6ea. 2026-03-09T20:20:24.421 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.b/config 2026-03-09T20:20:24.444 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:24 vm09 ceph-mon[54524]: from='client.24349 v1:192.168.123.109:0/2846762786' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm05=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:20:24.444 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:24 vm09 ceph-mon[54524]: Saving service rgw.foo.a spec with placement vm05=foo.a;count:1 2026-03-09T20:20:24.444 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:24 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:24.444 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:24 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:20:24.444 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:24 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:24.444 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:24 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:20:24.444 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:24 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:24.444 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:24 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T20:20:24.444 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:24 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-09T20:20:24.444 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:24 vm09 ceph-mon[54524]: from='mgr.14150 
v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:24.444 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:24 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:24.444 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:24 vm09 ceph-mon[54524]: Deploying daemon rgw.foo.a on vm05 2026-03-09T20:20:24.444 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:24 vm09 ceph-mon[54524]: pgmap v101: 1 pgs: 1 peering; 0 B data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T20:20:24.444 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:24 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.109:0/2448962654' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T20:20:24.444 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:24 vm09 ceph-mon[54524]: from='client.24338 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T20:20:24.444 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:24 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:24.444 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:24 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:25.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:25 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:25.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:25 vm05 ceph-mon[61345]: Saving service rgw.foo.a spec with placement vm05=foo.a;count:1 2026-03-09T20:20:25.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:25 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:25.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:25 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:25.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:25 vm05 ceph-mon[61345]: from='client.24338 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-09T20:20:25.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:25 vm05 ceph-mon[61345]: osdmap e49: 8 total, 8 up, 8 in 2026-03-09T20:20:25.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:25 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/4076623339' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T20:20:25.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:25 vm05 ceph-mon[61345]: from='client.24347 ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T20:20:25.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:25 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:20:25.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:25 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.109:0/1630879539' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-09T20:20:25.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:25 vm05 ceph-mon[61345]: from='client.24364 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-09T20:20:25.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:25 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:25.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:25 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:25.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:25 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:25.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:25 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:20:25.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:25 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:25.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:25 vm05 ceph-mon[51870]: Saving service rgw.foo.a spec with placement vm05=foo.a;count:1 2026-03-09T20:20:25.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:25 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:25.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:25 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:25.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:25 vm05 ceph-mon[51870]: from='client.24338 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-09T20:20:25.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:25 vm05 ceph-mon[51870]: osdmap e49: 8 total, 8 up, 8 in 2026-03-09T20:20:25.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:25 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/4076623339' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T20:20:25.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:25 vm05 ceph-mon[51870]: from='client.24347 ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T20:20:25.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:25 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:20:25.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:25 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.109:0/1630879539' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-09T20:20:25.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:25 vm05 ceph-mon[51870]: from='client.24364 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-09T20:20:25.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:25 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:25.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:25 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:25.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:25 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:25.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:25 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:20:25.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:25 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:25.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:25 vm09 ceph-mon[54524]: Saving service rgw.foo.a spec with placement vm05=foo.a;count:1 2026-03-09T20:20:25.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:25 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:25.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:25 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:25.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:25 vm09 ceph-mon[54524]: from='client.24338 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-09T20:20:25.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:25 vm09 ceph-mon[54524]: osdmap e49: 8 total, 8 up, 8 in 2026-03-09T20:20:25.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:25 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/4076623339' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T20:20:25.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:25 vm09 ceph-mon[54524]: from='client.24347 ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T20:20:25.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:25 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:20:25.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:25 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.109:0/1630879539' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-09T20:20:25.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:25 vm09 ceph-mon[54524]: from='client.24364 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-09T20:20:25.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:25 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:25.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:25 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:25.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:25 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:25.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:25 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:20:26.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:26 vm05 ceph-mon[61345]: Checking dashboard <-> RGW credentials 2026-03-09T20:20:26.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:26 vm05 ceph-mon[61345]: from='client.24347 ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-09T20:20:26.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:26 vm05 ceph-mon[61345]: from='client.24364 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-09T20:20:26.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:26 vm05 ceph-mon[61345]: osdmap e50: 8 total, 8 up, 8 in 2026-03-09T20:20:26.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:26 vm05 ceph-mon[61345]: pgmap v104: 36 pgs: 4 active+clean, 5 creating+peering, 26 unknown, 1 peering; 0 B data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T20:20:26.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:26 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:26.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:26 vm05 ceph-mon[51870]: Checking dashboard <-> RGW credentials 2026-03-09T20:20:26.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:26 vm05 ceph-mon[51870]: from='client.24347 ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-09T20:20:26.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:26 vm05 ceph-mon[51870]: from='client.24364 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-09T20:20:26.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:26 vm05 ceph-mon[51870]: osdmap e50: 8 total, 8 up, 8 in 2026-03-09T20:20:26.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:26 vm05 ceph-mon[51870]: pgmap v104: 36 pgs: 4 active+clean, 5 creating+peering, 26 unknown, 1 peering; 0 B data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T20:20:26.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:26 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:26.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:26 vm09 ceph-mon[54524]: 
Checking dashboard <-> RGW credentials 2026-03-09T20:20:26.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:26 vm09 ceph-mon[54524]: from='client.24347 ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-09T20:20:26.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:26 vm09 ceph-mon[54524]: from='client.24364 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-09T20:20:26.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:26 vm09 ceph-mon[54524]: osdmap e50: 8 total, 8 up, 8 in 2026-03-09T20:20:26.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:26 vm09 ceph-mon[54524]: pgmap v104: 36 pgs: 4 active+clean, 5 creating+peering, 26 unknown, 1 peering; 0 B data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T20:20:26.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:26 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:27.249 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph orch apply iscsi datapool admin admin --trusted_ip_list 192.168.123.109 --placement '1;vm09=iscsi.a' 2026-03-09T20:20:27.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:27 vm05 ceph-mon[61345]: osdmap e51: 8 total, 8 up, 8 in 2026-03-09T20:20:27.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:27 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/27172436' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T20:20:27.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:27 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2713574341' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T20:20:27.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:27 vm05 ceph-mon[51870]: osdmap e51: 8 total, 8 up, 8 in 2026-03-09T20:20:27.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:27 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/27172436' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T20:20:27.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:27 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2713574341' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T20:20:27.444 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.b/config 2026-03-09T20:20:27.469 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:27 vm09 ceph-mon[54524]: osdmap e51: 8 total, 8 up, 8 in 2026-03-09T20:20:27.469 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:27 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/27172436' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T20:20:27.469 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:27 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/2713574341' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T20:20:27.579 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:27.578+0000 7f9c3f7fe640 1 -- 192.168.123.109:0/4123831426 <== mon.2 v1:192.168.123.105:6790/0 5 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f9c380045d0 con 0x7f9c48108da0 2026-03-09T20:20:27.579 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:27.578+0000 7f9c4ebd3640 1 -- 192.168.123.109:0/4123831426 >> v1:192.168.123.105:6790/0 conn(0x7f9c48108da0 legacy=0x7f9c4810b1f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:27.579 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:27.578+0000 7f9c4ebd3640 1 -- 192.168.123.109:0/4123831426 shutdown_connections 2026-03-09T20:20:27.579 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:27.579+0000 7f9c4ebd3640 1 -- 192.168.123.109:0/4123831426 >> 192.168.123.109:0/4123831426 conn(0x7f9c48100120 msgr2=0x7f9c48102540 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:27.579 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:27.579+0000 7f9c4ebd3640 1 -- 192.168.123.109:0/4123831426 shutdown_connections 2026-03-09T20:20:27.579 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:27.579+0000 7f9c4ebd3640 1 -- 192.168.123.109:0/4123831426 wait complete. 2026-03-09T20:20:27.579 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:27.579+0000 7f9c4ebd3640 1 Processor -- start 2026-03-09T20:20:27.579 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:27.579+0000 7f9c4ebd3640 1 -- start start 2026-03-09T20:20:27.580 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:27.579+0000 7f9c4ebd3640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f9c4819c850 con 0x7f9c4810cab0 2026-03-09T20:20:27.580 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:27.579+0000 7f9c4ebd3640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f9c481a8000 con 0x7f9c48108da0 2026-03-09T20:20:27.580 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:27.579+0000 7f9c4ebd3640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f9c481a91e0 con 0x7f9c48104970 2026-03-09T20:20:27.580 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:27.580+0000 7f9c4c948640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7f9c48104970 0x7f9c4819bcd0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.109:53624/0 (socket says 192.168.123.109:53624) 2026-03-09T20:20:27.580 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:27.580+0000 7f9c4c948640 1 -- 192.168.123.109:0/2739954515 learned_addr learned my addr 192.168.123.109:0/2739954515 (peer_addr_for_me v1:192.168.123.109:0/0) 2026-03-09T20:20:27.580 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:27.580+0000 7f9c3ffff640 1 --1- >> v1:192.168.123.109:6789/0 conn(0x7f9c48108da0 0x7f9c481a21a0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.109:6789/0 says I am v1:192.168.123.109:55702/0 (socket says 192.168.123.109:55702) 2026-03-09T20:20:27.580 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:27.580+0000 7f9c3dffb640 1 -- 192.168.123.109:0/2739954515 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 
33+0+0 (unknown 3952382529 0 0) 0x7f9c481a8000 con 0x7f9c48108da0 2026-03-09T20:20:27.581 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:27.580+0000 7f9c3dffb640 1 -- 192.168.123.109:0/2739954515 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f9c20003620 con 0x7f9c48108da0 2026-03-09T20:20:27.581 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:27.580+0000 7f9c3dffb640 1 -- 192.168.123.109:0/2739954515 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1708994341 0 0) 0x7f9c481a91e0 con 0x7f9c48104970 2026-03-09T20:20:27.581 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:27.580+0000 7f9c3dffb640 1 -- 192.168.123.109:0/2739954515 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f9c481a8000 con 0x7f9c48104970 2026-03-09T20:20:27.581 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:27.580+0000 7f9c3dffb640 1 -- 192.168.123.109:0/2739954515 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1239515536 0 0) 0x7f9c4819c850 con 0x7f9c4810cab0 2026-03-09T20:20:27.581 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:27.580+0000 7f9c3dffb640 1 -- 192.168.123.109:0/2739954515 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f9c481a91e0 con 0x7f9c4810cab0 2026-03-09T20:20:27.581 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:27.581+0000 7f9c3dffb640 1 -- 192.168.123.109:0/2739954515 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 476845661 0 0) 0x7f9c481a8000 con 0x7f9c48104970 2026-03-09T20:20:27.581 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:27.581+0000 7f9c3dffb640 1 -- 192.168.123.109:0/2739954515 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f9c4819c850 con 0x7f9c48104970 2026-03-09T20:20:27.581 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:27.581+0000 7f9c3dffb640 1 -- 192.168.123.109:0/2739954515 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f9c30003100 con 0x7f9c48104970 2026-03-09T20:20:27.581 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:27.581+0000 7f9c3dffb640 1 -- 192.168.123.109:0/2739954515 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1741974748 0 0) 0x7f9c481a91e0 con 0x7f9c4810cab0 2026-03-09T20:20:27.581 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:27.581+0000 7f9c3dffb640 1 -- 192.168.123.109:0/2739954515 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f9c481a8000 con 0x7f9c4810cab0 2026-03-09T20:20:27.582 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:27.581+0000 7f9c3dffb640 1 -- 192.168.123.109:0/2739954515 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f9c44003310 con 0x7f9c4810cab0 2026-03-09T20:20:27.582 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:27.581+0000 7f9c3dffb640 1 -- 192.168.123.109:0/2739954515 <== mon.2 v1:192.168.123.105:6790/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 68867549 0 0) 0x7f9c4819c850 con 0x7f9c48104970 2026-03-09T20:20:27.582 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:27.581+0000 7f9c3dffb640 1 -- 192.168.123.109:0/2739954515 >> v1:192.168.123.109:6789/0 conn(0x7f9c48108da0 legacy=0x7f9c481a21a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 
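The monclient handshake above belongs to the cephadm shell wrapper that this job uses for every orchestrator call from vm09. A minimal sketch of that invocation pattern, with the image, fsid and keyring path taken from this run and "ceph orch ls" standing in for whatever command is actually issued:

    # start a one-shot cephadm shell container and run a single orchestrator command
    sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df \
        shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea \
        -- ceph orch ls   # example command; this run issues 'ceph orch apply ...' here

Because each call is a fresh, short-lived client, every apply in this log is bracketed by the connect/auth and mark_down/shutdown_connections messenger noise seen in this stderr.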
2026-03-09T20:20:27.582 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:27.581+0000 7f9c3dffb640 1 -- 192.168.123.109:0/2739954515 >> v1:192.168.123.105:6789/0 conn(0x7f9c4810cab0 legacy=0x7f9c481a58d0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:27.582 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:27.581+0000 7f9c3dffb640 1 -- 192.168.123.109:0/2739954515 --> v1:192.168.123.105:6790/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f9c481aa3c0 con 0x7f9c48104970 2026-03-09T20:20:27.582 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:27.581+0000 7f9c4ebd3640 1 -- 192.168.123.109:0/2739954515 --> v1:192.168.123.105:6790/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f9c481a8230 con 0x7f9c48104970 2026-03-09T20:20:27.582 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:27.581+0000 7f9c4ebd3640 1 -- 192.168.123.109:0/2739954515 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=0}) -- 0x7f9c481a8810 con 0x7f9c48104970 2026-03-09T20:20:27.583 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:27.582+0000 7f9c3dffb640 1 -- 192.168.123.109:0/2739954515 <== mon.2 v1:192.168.123.105:6790/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f9c30003b60 con 0x7f9c48104970 2026-03-09T20:20:27.583 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:27.582+0000 7f9c3dffb640 1 -- 192.168.123.109:0/2739954515 <== mon.2 v1:192.168.123.105:6790/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f9c30005c60 con 0x7f9c48104970 2026-03-09T20:20:27.583 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:27.583+0000 7f9c3dffb640 1 -- 192.168.123.109:0/2739954515 <== mon.2 v1:192.168.123.105:6790/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 4267040124 0 0) 0x7f9c30006f10 con 0x7f9c48104970 2026-03-09T20:20:27.583 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:27.583+0000 7f9c4ebd3640 1 -- 192.168.123.109:0/2739954515 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f9c10005180 con 0x7f9c48104970 2026-03-09T20:20:27.584 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:27.583+0000 7f9c3dffb640 1 -- 192.168.123.109:0/2739954515 <== mon.2 v1:192.168.123.105:6790/0 8 ==== osd_map(52..52 src has 1..52) ==== 5199+0+0 (unknown 1226651949 0 0) 0x7f9c30094ad0 con 0x7f9c48104970 2026-03-09T20:20:27.587 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:27.586+0000 7f9c3dffb640 1 -- 192.168.123.109:0/2739954515 <== mon.2 v1:192.168.123.105:6790/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f9c3005ede0 con 0x7f9c48104970 2026-03-09T20:20:27.689 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:27.689+0000 7f9c4ebd3640 1 -- 192.168.123.109:0/2739954515 --> v1:192.168.123.105:6800/3290461294 -- mgr_command(tid 0: {"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.109", "placement": "1;vm09=iscsi.a", "target": ["mon-mgr", ""]}) -- 0x7f9c10002cc0 con 0x7f9c20078090 2026-03-09T20:20:27.698 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:27.698+0000 7f9c3dffb640 1 -- 192.168.123.109:0/2739954515 <== mgr.14150 v1:192.168.123.105:6800/3290461294 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+35 (unknown 0 0 803663096) 0x7f9c10002cc0 con 0x7f9c20078090 2026-03-09T20:20:27.698 INFO:teuthology.orchestra.run.vm09.stdout:Scheduled iscsi.datapool update... 
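The "Scheduled iscsi.datapool update..." acknowledgement corresponds to the one-line CLI form used above (ceph orch apply iscsi datapool admin admin --trusted_ip_list 192.168.123.109 --placement '1;vm09=iscsi.a'). A hedged sketch of the same request as a declarative service spec; the field names follow the upstream cephadm iscsi spec format and the file path is arbitrary, neither is taken from this log:

    # run from a node that has the admin keyring, e.g. inside a cephadm shell
    cat > /tmp/iscsi-datapool.yaml <<'EOF'
    service_type: iscsi
    service_id: datapool
    placement:
      count: 1
      hosts:
        - vm09
    spec:
      pool: datapool
      api_user: admin
      api_password: admin
      trusted_ip_list: "192.168.123.109"
    EOF
    ceph orch apply -i /tmp/iscsi-datapool.yaml

Either form only records the spec with the mgr ("Saving service iscsi.datapool spec" appears further down); the daemon itself is deployed asynchronously on a later orchestrator refresh.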
2026-03-09T20:20:27.701 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:27.700+0000 7f9c4ebd3640 1 -- 192.168.123.109:0/2739954515 >> v1:192.168.123.105:6800/3290461294 conn(0x7f9c20078090 legacy=0x7f9c2007a550 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:27.701 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:27.701+0000 7f9c4ebd3640 1 -- 192.168.123.109:0/2739954515 >> v1:192.168.123.105:6790/0 conn(0x7f9c48104970 legacy=0x7f9c4819bcd0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:27.701 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:27.701+0000 7f9c4ebd3640 1 -- 192.168.123.109:0/2739954515 shutdown_connections 2026-03-09T20:20:27.701 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:27.701+0000 7f9c4ebd3640 1 -- 192.168.123.109:0/2739954515 >> 192.168.123.109:0/2739954515 conn(0x7f9c48100120 msgr2=0x7f9c4810f6a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:27.701 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:27.701+0000 7f9c4ebd3640 1 -- 192.168.123.109:0/2739954515 shutdown_connections 2026-03-09T20:20:27.701 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:27.701+0000 7f9c4ebd3640 1 -- 192.168.123.109:0/2739954515 wait complete. 2026-03-09T20:20:27.848 INFO:tasks.cephadm:Distributing iscsi-gateway.cfg... 2026-03-09T20:20:27.848 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-09T20:20:27.848 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/ceph/iscsi-gateway.cfg 2026-03-09T20:20:27.877 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T20:20:27.877 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/ceph/iscsi-gateway.cfg 2026-03-09T20:20:27.904 DEBUG:teuthology.orchestra.run.vm09:iscsi.iscsi.a> sudo journalctl -f -n 0 -u ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@iscsi.iscsi.a.service 2026-03-09T20:20:27.946 INFO:tasks.cephadm:Adding prometheus.a on vm09 2026-03-09T20:20:27.946 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph orch apply prometheus '1;vm09=a' 2026-03-09T20:20:28.121 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:28 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/27172436' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-09T20:20:28.121 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:28 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2713574341' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-09T20:20:28.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:28 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/27172436' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-09T20:20:28.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:28 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/2713574341' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-09T20:20:28.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:28 vm05 ceph-mon[61345]: osdmap e52: 8 total, 8 up, 8 in 2026-03-09T20:20:28.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:28 vm05 ceph-mon[61345]: pgmap v107: 68 pgs: 13 active+clean, 6 creating+peering, 49 unknown; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T20:20:28.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:28 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:28.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:28 vm05 ceph-mon[61345]: osdmap e53: 8 total, 8 up, 8 in 2026-03-09T20:20:28.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:28 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2713574341' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T20:20:28.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:28 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/27172436' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T20:20:28.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:28 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/27172436' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-09T20:20:28.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:28 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2713574341' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-09T20:20:28.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:28 vm05 ceph-mon[51870]: osdmap e52: 8 total, 8 up, 8 in 2026-03-09T20:20:28.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:28 vm05 ceph-mon[51870]: pgmap v107: 68 pgs: 13 active+clean, 6 creating+peering, 49 unknown; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T20:20:28.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:28 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:28.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:28 vm05 ceph-mon[51870]: osdmap e53: 8 total, 8 up, 8 in 2026-03-09T20:20:28.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:28 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2713574341' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T20:20:28.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:28 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/27172436' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T20:20:28.185 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.b/config 2026-03-09T20:20:28.318 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.317+0000 7f11172fc640 1 -- 192.168.123.109:0/2482398839 >> v1:192.168.123.109:6789/0 conn(0x7f1110104990 legacy=0x7f1110104d90 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:28.319 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.318+0000 7f11172fc640 1 -- 192.168.123.109:0/2482398839 shutdown_connections 2026-03-09T20:20:28.319 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.318+0000 7f11172fc640 1 -- 192.168.123.109:0/2482398839 >> 192.168.123.109:0/2482398839 conn(0x7f1110100120 msgr2=0x7f1110102560 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:28.319 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.318+0000 7f11172fc640 1 -- 192.168.123.109:0/2482398839 shutdown_connections 2026-03-09T20:20:28.319 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.318+0000 7f11172fc640 1 -- 192.168.123.109:0/2482398839 wait complete. 2026-03-09T20:20:28.319 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.319+0000 7f11172fc640 1 Processor -- start 2026-03-09T20:20:28.319 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.319+0000 7f11172fc640 1 -- start start 2026-03-09T20:20:28.320 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.319+0000 7f11172fc640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f1110078010 con 0x7f1110104990 2026-03-09T20:20:28.320 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.319+0000 7f11172fc640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f11100781e0 con 0x7f1110108dc0 2026-03-09T20:20:28.320 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.319+0000 7f11172fc640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f11100783b0 con 0x7f111010cad0 2026-03-09T20:20:28.320 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.319+0000 7f1114870640 1 --1- >> v1:192.168.123.109:6789/0 conn(0x7f1110108dc0 0x7f1110077900 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.109:6789/0 says I am v1:192.168.123.109:55718/0 (socket says 192.168.123.109:55718) 2026-03-09T20:20:28.320 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.319+0000 7f1114870640 1 -- 192.168.123.109:0/2250478627 learned_addr learned my addr 192.168.123.109:0/2250478627 (peer_addr_for_me v1:192.168.123.109:0/0) 2026-03-09T20:20:28.320 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.320+0000 7f11067fc640 1 -- 192.168.123.109:0/2250478627 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3657637496 0 0) 0x7f11100781e0 con 0x7f1110108dc0 2026-03-09T20:20:28.320 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.320+0000 7f11067fc640 1 -- 192.168.123.109:0/2250478627 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f10e8003620 con 0x7f1110108dc0 2026-03-09T20:20:28.320 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.320+0000 7f11067fc640 1 -- 192.168.123.109:0/2250478627 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 
(0) Success) ==== 33+0+0 (unknown 1206538490 0 0) 0x7f1110078010 con 0x7f1110104990 2026-03-09T20:20:28.320 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.320+0000 7f11067fc640 1 -- 192.168.123.109:0/2250478627 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f11100781e0 con 0x7f1110104990 2026-03-09T20:20:28.320 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.320+0000 7f11067fc640 1 -- 192.168.123.109:0/2250478627 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1351500471 0 0) 0x7f11100783b0 con 0x7f111010cad0 2026-03-09T20:20:28.321 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.320+0000 7f11067fc640 1 -- 192.168.123.109:0/2250478627 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f1110078010 con 0x7f111010cad0 2026-03-09T20:20:28.321 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.320+0000 7f11067fc640 1 -- 192.168.123.109:0/2250478627 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 882833383 0 0) 0x7f10e8003620 con 0x7f1110108dc0 2026-03-09T20:20:28.321 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.320+0000 7f11067fc640 1 -- 192.168.123.109:0/2250478627 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f11100783b0 con 0x7f1110108dc0 2026-03-09T20:20:28.321 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.320+0000 7f11067fc640 1 -- 192.168.123.109:0/2250478627 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f1100004150 con 0x7f1110108dc0 2026-03-09T20:20:28.321 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.320+0000 7f11067fc640 1 -- 192.168.123.109:0/2250478627 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1262050190 0 0) 0x7f1110078010 con 0x7f111010cad0 2026-03-09T20:20:28.321 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.320+0000 7f11067fc640 1 -- 192.168.123.109:0/2250478627 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f10e8003620 con 0x7f111010cad0 2026-03-09T20:20:28.321 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.320+0000 7f11067fc640 1 -- 192.168.123.109:0/2250478627 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f110c0032f0 con 0x7f111010cad0 2026-03-09T20:20:28.321 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.320+0000 7f11067fc640 1 -- 192.168.123.109:0/2250478627 <== mon.1 v1:192.168.123.109:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 997146268 0 0) 0x7f11100783b0 con 0x7f1110108dc0 2026-03-09T20:20:28.321 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.320+0000 7f11067fc640 1 -- 192.168.123.109:0/2250478627 >> v1:192.168.123.105:6790/0 conn(0x7f111010cad0 legacy=0x7f11101aa450 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:28.321 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.320+0000 7f11067fc640 1 -- 192.168.123.109:0/2250478627 >> v1:192.168.123.105:6789/0 conn(0x7f1110104990 legacy=0x7f111007adc0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:28.321 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.321+0000 7f11067fc640 1 -- 192.168.123.109:0/2250478627 --> v1:192.168.123.109:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f11101aeba0 con 
0x7f1110108dc0 2026-03-09T20:20:28.321 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.321+0000 7f11172fc640 1 -- 192.168.123.109:0/2250478627 --> v1:192.168.123.109:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f11101abb70 con 0x7f1110108dc0 2026-03-09T20:20:28.322 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.321+0000 7f11172fc640 1 -- 192.168.123.109:0/2250478627 --> v1:192.168.123.109:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f11101ac1a0 con 0x7f1110108dc0 2026-03-09T20:20:28.323 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.322+0000 7f11067fc640 1 -- 192.168.123.109:0/2250478627 <== mon.1 v1:192.168.123.109:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f1100002b30 con 0x7f1110108dc0 2026-03-09T20:20:28.323 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.322+0000 7f11067fc640 1 -- 192.168.123.109:0/2250478627 <== mon.1 v1:192.168.123.109:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f1100005c50 con 0x7f1110108dc0 2026-03-09T20:20:28.323 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.322+0000 7f11067fc640 1 -- 192.168.123.109:0/2250478627 <== mon.1 v1:192.168.123.109:6789/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 4267040124 0 0) 0x7f110001e5a0 con 0x7f1110108dc0 2026-03-09T20:20:28.323 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.323+0000 7f11067fc640 1 -- 192.168.123.109:0/2250478627 <== mon.1 v1:192.168.123.109:6789/0 8 ==== osd_map(53..53 src has 1..53) ==== 5529+0+0 (unknown 2657834890 0 0) 0x7f1100095b60 con 0x7f1110108dc0 2026-03-09T20:20:28.323 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.323+0000 7f11172fc640 1 -- 192.168.123.109:0/2250478627 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f10d8005180 con 0x7f1110108dc0 2026-03-09T20:20:28.326 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.326+0000 7f11067fc640 1 -- 192.168.123.109:0/2250478627 <== mon.1 v1:192.168.123.109:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f110005ef20 con 0x7f1110108dc0 2026-03-09T20:20:28.426 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:28 vm09 ceph-mon[54524]: osdmap e52: 8 total, 8 up, 8 in 2026-03-09T20:20:28.426 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:28 vm09 ceph-mon[54524]: pgmap v107: 68 pgs: 13 active+clean, 6 creating+peering, 49 unknown; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T20:20:28.426 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:28 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:28.426 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:28 vm09 ceph-mon[54524]: osdmap e53: 8 total, 8 up, 8 in 2026-03-09T20:20:28.426 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:28 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2713574341' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T20:20:28.427 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:28 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/27172436' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T20:20:28.427 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.425+0000 7f11172fc640 1 -- 192.168.123.109:0/2250478627 --> v1:192.168.123.105:6800/3290461294 -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm09=a", "target": ["mon-mgr", ""]}) -- 0x7f10d8002bf0 con 0x7f10e80782f0 2026-03-09T20:20:28.443 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.442+0000 7f11067fc640 1 -- 192.168.123.109:0/2250478627 <== mgr.14150 v1:192.168.123.105:6800/3290461294 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+31 (unknown 0 0 1342662408) 0x7f10d8002bf0 con 0x7f10e80782f0 2026-03-09T20:20:28.443 INFO:teuthology.orchestra.run.vm09.stdout:Scheduled prometheus update... 2026-03-09T20:20:28.445 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.444+0000 7f11172fc640 1 -- 192.168.123.109:0/2250478627 >> v1:192.168.123.105:6800/3290461294 conn(0x7f10e80782f0 legacy=0x7f10e807a7b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:28.445 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.445+0000 7f11172fc640 1 -- 192.168.123.109:0/2250478627 >> v1:192.168.123.109:6789/0 conn(0x7f1110108dc0 legacy=0x7f1110077900 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:28.445 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.445+0000 7f11172fc640 1 -- 192.168.123.109:0/2250478627 shutdown_connections 2026-03-09T20:20:28.445 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.445+0000 7f11172fc640 1 -- 192.168.123.109:0/2250478627 >> 192.168.123.109:0/2250478627 conn(0x7f1110100120 msgr2=0x7f111010b210 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:28.445 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.445+0000 7f11172fc640 1 -- 192.168.123.109:0/2250478627 shutdown_connections 2026-03-09T20:20:28.445 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.445+0000 7f11172fc640 1 -- 192.168.123.109:0/2250478627 wait complete. 
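After "Scheduled prometheus update..." the service exists only as a saved spec; whether a daemon has actually been placed can be checked from the same admin node. A small sketch using standard orchestrator queries (these commands are not taken from this log):

    # list known services and their scheduled vs. running counts
    ceph orch ls
    # show the daemons backing one service, e.g. the prometheus instance expected on vm09
    ceph orch ps --service_name prometheus

The test harness instead follows each daemon's systemd unit directly, as the journalctl lines below show.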
2026-03-09T20:20:28.615 DEBUG:teuthology.orchestra.run.vm09:prometheus.a> sudo journalctl -f -n 0 -u ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@prometheus.a.service 2026-03-09T20:20:28.617 INFO:tasks.cephadm:Adding node-exporter.a on vm05 2026-03-09T20:20:28.617 INFO:tasks.cephadm:Adding node-exporter.b on vm09 2026-03-09T20:20:28.617 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph orch apply node-exporter '2;vm05=a;vm09=b' 2026-03-09T20:20:28.811 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.b/config 2026-03-09T20:20:28.928 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.927+0000 7fd80f3c4640 1 -- 192.168.123.109:0/2479629985 >> v1:192.168.123.109:6789/0 conn(0x7fd808108dc0 legacy=0x7fd80810b210 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:28.928 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.928+0000 7fd80f3c4640 1 -- 192.168.123.109:0/2479629985 shutdown_connections 2026-03-09T20:20:28.928 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.928+0000 7fd80f3c4640 1 -- 192.168.123.109:0/2479629985 >> 192.168.123.109:0/2479629985 conn(0x7fd808100120 msgr2=0x7fd808102560 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:28.928 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.928+0000 7fd80f3c4640 1 -- 192.168.123.109:0/2479629985 shutdown_connections 2026-03-09T20:20:28.928 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.928+0000 7fd80f3c4640 1 -- 192.168.123.109:0/2479629985 wait complete. 
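The placement argument used throughout this job follows the compact 'count;host=daemon_id;...' string form: '1;vm05=foo.a' for the rgw, '1;vm09=iscsi.a' for the iscsi gateway, and here '2;vm05=a;vm09=b' to pin one node-exporter per host with explicit daemon ids. A hedged shell sketch of the same call outside the teuthology wrapper (hostnames and ids as in this run):

    # pin node-exporter daemons to specific hosts with fixed daemon ids
    ceph orch apply node-exporter '2;vm05=a;vm09=b'

An unconstrained alternative would be a label- or host-pattern placement, but this suite pins daemons explicitly, presumably so later steps know exactly where each daemon runs.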
2026-03-09T20:20:28.928 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.928+0000 7fd80f3c4640 1 Processor -- start 2026-03-09T20:20:28.929 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.928+0000 7fd80f3c4640 1 -- start start 2026-03-09T20:20:28.929 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.928+0000 7fd80f3c4640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fd80819c890 con 0x7fd808104990 2026-03-09T20:20:28.929 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.928+0000 7fd80f3c4640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fd8081a8040 con 0x7fd80810cad0 2026-03-09T20:20:28.929 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.928+0000 7fd80f3c4640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fd8081a9220 con 0x7fd808108dc0 2026-03-09T20:20:28.929 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.929+0000 7fd80c938640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7fd808108dc0 0x7fd8081a21e0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.109:53676/0 (socket says 192.168.123.109:53676) 2026-03-09T20:20:28.929 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.929+0000 7fd80c938640 1 -- 192.168.123.109:0/347711747 learned_addr learned my addr 192.168.123.109:0/347711747 (peer_addr_for_me v1:192.168.123.109:0/0) 2026-03-09T20:20:28.929 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.929+0000 7fd7f67fc640 1 -- 192.168.123.109:0/347711747 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2787344285 0 0) 0x7fd8081a8040 con 0x7fd80810cad0 2026-03-09T20:20:28.929 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.929+0000 7fd7f67fc640 1 -- 192.168.123.109:0/347711747 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fd7e0003620 con 0x7fd80810cad0 2026-03-09T20:20:28.929 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.929+0000 7fd7f67fc640 1 -- 192.168.123.109:0/347711747 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1762078725 0 0) 0x7fd8081a9220 con 0x7fd808108dc0 2026-03-09T20:20:28.930 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.929+0000 7fd7f67fc640 1 -- 192.168.123.109:0/347711747 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fd8081a8040 con 0x7fd808108dc0 2026-03-09T20:20:28.930 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.929+0000 7fd7f67fc640 1 -- 192.168.123.109:0/347711747 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3188618892 0 0) 0x7fd7e0003620 con 0x7fd80810cad0 2026-03-09T20:20:28.930 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.929+0000 7fd7f67fc640 1 -- 192.168.123.109:0/347711747 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fd8081a9220 con 0x7fd80810cad0 2026-03-09T20:20:28.930 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.930+0000 7fd7f67fc640 1 -- 192.168.123.109:0/347711747 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7fd804003400 con 0x7fd80810cad0 2026-03-09T20:20:28.930 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.930+0000 7fd7f67fc640 1 -- 192.168.123.109:0/347711747 <== mon.0 v1:192.168.123.105:6789/0 
1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2009589035 0 0) 0x7fd80819c890 con 0x7fd808104990 2026-03-09T20:20:28.930 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.930+0000 7fd7f67fc640 1 -- 192.168.123.109:0/347711747 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fd7e0003620 con 0x7fd808104990 2026-03-09T20:20:28.930 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.930+0000 7fd7f67fc640 1 -- 192.168.123.109:0/347711747 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1689407308 0 0) 0x7fd8081a8040 con 0x7fd808108dc0 2026-03-09T20:20:28.930 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.930+0000 7fd7f67fc640 1 -- 192.168.123.109:0/347711747 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fd80819c890 con 0x7fd808108dc0 2026-03-09T20:20:28.930 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.930+0000 7fd7f67fc640 1 -- 192.168.123.109:0/347711747 <== mon.1 v1:192.168.123.109:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2367240406 0 0) 0x7fd8081a9220 con 0x7fd80810cad0 2026-03-09T20:20:28.930 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.930+0000 7fd7f67fc640 1 -- 192.168.123.109:0/347711747 >> v1:192.168.123.105:6790/0 conn(0x7fd808108dc0 legacy=0x7fd8081a21e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:28.931 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.930+0000 7fd7f67fc640 1 -- 192.168.123.109:0/347711747 >> v1:192.168.123.105:6789/0 conn(0x7fd808104990 legacy=0x7fd80819bd10 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:28.931 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.930+0000 7fd7f67fc640 1 -- 192.168.123.109:0/347711747 --> v1:192.168.123.109:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fd8081aa400 con 0x7fd80810cad0 2026-03-09T20:20:28.931 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.931+0000 7fd80f3c4640 1 -- 192.168.123.109:0/347711747 --> v1:192.168.123.109:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7fd8081a8270 con 0x7fd80810cad0 2026-03-09T20:20:28.931 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.931+0000 7fd80f3c4640 1 -- 192.168.123.109:0/347711747 --> v1:192.168.123.109:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7fd8081a8850 con 0x7fd80810cad0 2026-03-09T20:20:28.931 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.931+0000 7fd80f3c4640 1 -- 192.168.123.109:0/347711747 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fd808111bc0 con 0x7fd80810cad0 2026-03-09T20:20:28.934 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.934+0000 7fd7f67fc640 1 -- 192.168.123.109:0/347711747 <== mon.1 v1:192.168.123.109:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7fd804003e20 con 0x7fd80810cad0 2026-03-09T20:20:28.934 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.934+0000 7fd7f67fc640 1 -- 192.168.123.109:0/347711747 <== mon.1 v1:192.168.123.109:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7fd804004ca0 con 0x7fd80810cad0 2026-03-09T20:20:28.935 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.934+0000 7fd7f67fc640 1 -- 192.168.123.109:0/347711747 <== mon.1 v1:192.168.123.109:6789/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 4267040124 0 0) 0x7fd80401d5f0 con 0x7fd80810cad0 2026-03-09T20:20:28.935 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.935+0000 7fd7f67fc640 1 -- 192.168.123.109:0/347711747 <== mon.1 v1:192.168.123.109:6789/0 8 ==== osd_map(53..53 src has 1..53) ==== 5529+0+0 (unknown 2657834890 0 0) 0x7fd804093c70 con 0x7fd80810cad0 2026-03-09T20:20:28.935 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:28.935+0000 7fd7f67fc640 1 -- 192.168.123.109:0/347711747 <== mon.1 v1:192.168.123.109:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7fd804094120 con 0x7fd80810cad0 2026-03-09T20:20:29.030 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.030+0000 7fd80f3c4640 1 -- 192.168.123.109:0/347711747 --> v1:192.168.123.105:6800/3290461294 -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm05=a;vm09=b", "target": ["mon-mgr", ""]}) -- 0x7fd80810a920 con 0x7fd7e00780f0 2026-03-09T20:20:29.036 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.036+0000 7fd7f67fc640 1 -- 192.168.123.109:0/347711747 <== mgr.14150 v1:192.168.123.105:6800/3290461294 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+34 (unknown 0 0 240551134) 0x7fd80810a920 con 0x7fd7e00780f0 2026-03-09T20:20:29.036 INFO:teuthology.orchestra.run.vm09.stdout:Scheduled node-exporter update... 2026-03-09T20:20:29.038 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.038+0000 7fd80f3c4640 1 -- 192.168.123.109:0/347711747 >> v1:192.168.123.105:6800/3290461294 conn(0x7fd7e00780f0 legacy=0x7fd7e007a5b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:29.038 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.038+0000 7fd80f3c4640 1 -- 192.168.123.109:0/347711747 >> v1:192.168.123.109:6789/0 conn(0x7fd80810cad0 legacy=0x7fd8081a5910 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:29.038 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.038+0000 7fd80f3c4640 1 -- 192.168.123.109:0/347711747 shutdown_connections 2026-03-09T20:20:29.038 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.038+0000 7fd80f3c4640 1 -- 192.168.123.109:0/347711747 >> 192.168.123.109:0/347711747 conn(0x7fd808100120 msgr2=0x7fd808109200 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:29.038 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.038+0000 7fd80f3c4640 1 -- 192.168.123.109:0/347711747 shutdown_connections 2026-03-09T20:20:29.038 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.038+0000 7fd80f3c4640 1 -- 192.168.123.109:0/347711747 wait complete. 
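Once the node-exporter update is scheduled, the harness attaches a log follower to each expected daemon, which is what the journalctl invocations just below do. cephadm names every systemd unit ceph-<fsid>@<daemon_type>.<daemon_id>.service, so the same thing can be done by hand; a sketch using the fsid and daemon name from this run:

    # follow a cephadm-managed daemon's unit log from the moment of deployment
    sudo journalctl -f -n 0 -u ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@node-exporter.a.service
    # or check whether the unit came up at all
    sudo systemctl status ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@node-exporter.a.service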
2026-03-09T20:20:29.199 DEBUG:teuthology.orchestra.run.vm05:node-exporter.a> sudo journalctl -f -n 0 -u ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@node-exporter.a.service 2026-03-09T20:20:29.201 DEBUG:teuthology.orchestra.run.vm09:node-exporter.b> sudo journalctl -f -n 0 -u ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@node-exporter.b.service 2026-03-09T20:20:29.202 INFO:tasks.cephadm:Adding alertmanager.a on vm05 2026-03-09T20:20:29.202 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph orch apply alertmanager '1;vm05=a' 2026-03-09T20:20:29.406 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.b/config 2026-03-09T20:20:29.535 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.534+0000 7ff20bd3d640 1 -- 192.168.123.109:0/2514099465 >> v1:192.168.123.109:6789/0 conn(0x7ff204108dc0 legacy=0x7ff20410b210 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:29.535 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.535+0000 7ff20bd3d640 1 -- 192.168.123.109:0/2514099465 shutdown_connections 2026-03-09T20:20:29.535 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.535+0000 7ff20bd3d640 1 -- 192.168.123.109:0/2514099465 >> 192.168.123.109:0/2514099465 conn(0x7ff204100120 msgr2=0x7ff204102560 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:29.535 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.535+0000 7ff20bd3d640 1 -- 192.168.123.109:0/2514099465 shutdown_connections 2026-03-09T20:20:29.535 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.535+0000 7ff20bd3d640 1 -- 192.168.123.109:0/2514099465 wait complete. 
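The alertmanager apply issued above follows the same pattern as the previous services. Once the mgr has logged "Saving service ... spec", the accumulated specs can be dumped back out for inspection; a hedged sketch (the --export option is the standard 'ceph orch ls' flag, not shown in this log):

    # dump every saved service spec as YAML
    ceph orch ls --export
    # or just the monitoring pieces scheduled in this step
    ceph orch ls alertmanager --export
    ceph orch ls prometheus --export

This is a convenient way to confirm that the placements recorded by the mons ("vm05=a;count:1" and friends) match what the test intended to apply.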
2026-03-09T20:20:29.535 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.535+0000 7ff20bd3d640 1 Processor -- start 2026-03-09T20:20:29.535 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.535+0000 7ff20bd3d640 1 -- start start 2026-03-09T20:20:29.536 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.535+0000 7ff20bd3d640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7ff20419ca50 con 0x7ff204108dc0 2026-03-09T20:20:29.536 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.535+0000 7ff20bd3d640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7ff2041a8210 con 0x7ff20410cad0 2026-03-09T20:20:29.536 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.535+0000 7ff20bd3d640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7ff2041a93f0 con 0x7ff204104990 2026-03-09T20:20:29.536 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.536+0000 7ff20a2b3640 1 --1- >> v1:192.168.123.109:6789/0 conn(0x7ff20410cad0 0x7ff2041a5ae0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.109:6789/0 says I am v1:192.168.123.109:55756/0 (socket says 192.168.123.109:55756) 2026-03-09T20:20:29.536 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.536+0000 7ff20a2b3640 1 -- 192.168.123.109:0/31932430 learned_addr learned my addr 192.168.123.109:0/31932430 (peer_addr_for_me v1:192.168.123.109:0/0) 2026-03-09T20:20:29.536 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.536+0000 7ff1f2ffd640 1 -- 192.168.123.109:0/31932430 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 303796214 0 0) 0x7ff2041a8210 con 0x7ff20410cad0 2026-03-09T20:20:29.536 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.536+0000 7ff1f2ffd640 1 -- 192.168.123.109:0/31932430 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7ff1d4003620 con 0x7ff20410cad0 2026-03-09T20:20:29.536 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.536+0000 7ff1f2ffd640 1 -- 192.168.123.109:0/31932430 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2034976439 0 0) 0x7ff2041a93f0 con 0x7ff204104990 2026-03-09T20:20:29.536 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.536+0000 7ff1f2ffd640 1 -- 192.168.123.109:0/31932430 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7ff2041a8210 con 0x7ff204104990 2026-03-09T20:20:29.537 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.536+0000 7ff1f2ffd640 1 -- 192.168.123.109:0/31932430 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3850161034 0 0) 0x7ff20419ca50 con 0x7ff204108dc0 2026-03-09T20:20:29.537 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.536+0000 7ff1f2ffd640 1 -- 192.168.123.109:0/31932430 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7ff2041a93f0 con 0x7ff204108dc0 2026-03-09T20:20:29.537 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.536+0000 7ff1f2ffd640 1 -- 192.168.123.109:0/31932430 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2253445419 0 0) 0x7ff1d4003620 con 0x7ff20410cad0 2026-03-09T20:20:29.537 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.536+0000 7ff1f2ffd640 1 -- 192.168.123.109:0/31932430 --> v1:192.168.123.109:6789/0 
-- auth(proto 2 165 bytes epoch 0) -- 0x7ff20419ca50 con 0x7ff20410cad0 2026-03-09T20:20:29.537 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.536+0000 7ff1f2ffd640 1 -- 192.168.123.109:0/31932430 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7ff200003540 con 0x7ff20410cad0 2026-03-09T20:20:29.538 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.536+0000 7ff1f2ffd640 1 -- 192.168.123.109:0/31932430 <== mon.1 v1:192.168.123.109:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 1289251715 0 0) 0x7ff20419ca50 con 0x7ff20410cad0 2026-03-09T20:20:29.538 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.536+0000 7ff1f2ffd640 1 -- 192.168.123.109:0/31932430 >> v1:192.168.123.105:6790/0 conn(0x7ff204104990 legacy=0x7ff20419bed0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:29.538 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.536+0000 7ff1f2ffd640 1 -- 192.168.123.109:0/31932430 >> v1:192.168.123.105:6789/0 conn(0x7ff204108dc0 legacy=0x7ff2041a23b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:29.538 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.536+0000 7ff1f2ffd640 1 -- 192.168.123.109:0/31932430 --> v1:192.168.123.109:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7ff2041aa5d0 con 0x7ff20410cad0 2026-03-09T20:20:29.538 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.536+0000 7ff20bd3d640 1 -- 192.168.123.109:0/31932430 --> v1:192.168.123.109:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7ff2041a7260 con 0x7ff20410cad0 2026-03-09T20:20:29.538 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.537+0000 7ff20bd3d640 1 -- 192.168.123.109:0/31932430 --> v1:192.168.123.109:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7ff2041a7840 con 0x7ff20410cad0 2026-03-09T20:20:29.538 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.537+0000 7ff1f2ffd640 1 -- 192.168.123.109:0/31932430 <== mon.1 v1:192.168.123.109:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7ff200003860 con 0x7ff20410cad0 2026-03-09T20:20:29.538 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.537+0000 7ff1f2ffd640 1 -- 192.168.123.109:0/31932430 <== mon.1 v1:192.168.123.109:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7ff200006090 con 0x7ff20410cad0 2026-03-09T20:20:29.538 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.538+0000 7ff1f2ffd640 1 -- 192.168.123.109:0/31932430 <== mon.1 v1:192.168.123.109:6789/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 4267040124 0 0) 0x7ff20001e740 con 0x7ff20410cad0 2026-03-09T20:20:29.539 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.538+0000 7ff1f2ffd640 1 -- 192.168.123.109:0/31932430 <== mon.1 v1:192.168.123.109:6789/0 8 ==== osd_map(54..54 src has 1..54) ==== 5540+0+0 (unknown 3997488103 0 0) 0x7ff200095d50 con 0x7ff20410cad0 2026-03-09T20:20:29.539 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.539+0000 7ff20bd3d640 1 -- 192.168.123.109:0/31932430 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7ff1cc005180 con 0x7ff20410cad0 2026-03-09T20:20:29.542 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.541+0000 7ff1f2ffd640 1 -- 192.168.123.109:0/31932430 <== mon.1 v1:192.168.123.109:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 
0x7ff20005efa0 con 0x7ff20410cad0 2026-03-09T20:20:29.638 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.637+0000 7ff20bd3d640 1 -- 192.168.123.109:0/31932430 --> v1:192.168.123.105:6800/3290461294 -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "alertmanager", "placement": "1;vm05=a", "target": ["mon-mgr", ""]}) -- 0x7ff1cc002bf0 con 0x7ff1d4077d40 2026-03-09T20:20:29.644 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.643+0000 7ff1f2ffd640 1 -- 192.168.123.109:0/31932430 <== mgr.14150 v1:192.168.123.105:6800/3290461294 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+33 (unknown 0 0 1850065467) 0x7ff1cc002bf0 con 0x7ff1d4077d40 2026-03-09T20:20:29.644 INFO:teuthology.orchestra.run.vm09.stdout:Scheduled alertmanager update... 2026-03-09T20:20:29.646 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.646+0000 7ff20bd3d640 1 -- 192.168.123.109:0/31932430 >> v1:192.168.123.105:6800/3290461294 conn(0x7ff1d4077d40 legacy=0x7ff1d407a200 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:29.646 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.646+0000 7ff20bd3d640 1 -- 192.168.123.109:0/31932430 >> v1:192.168.123.109:6789/0 conn(0x7ff20410cad0 legacy=0x7ff2041a5ae0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:29.646 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.646+0000 7ff20bd3d640 1 -- 192.168.123.109:0/31932430 shutdown_connections 2026-03-09T20:20:29.646 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.646+0000 7ff20bd3d640 1 -- 192.168.123.109:0/31932430 >> 192.168.123.109:0/31932430 conn(0x7ff204100120 msgr2=0x7ff204109200 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:29.646 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.646+0000 7ff20bd3d640 1 -- 192.168.123.109:0/31932430 shutdown_connections 2026-03-09T20:20:29.646 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:29.646+0000 7ff20bd3d640 1 -- 192.168.123.109:0/31932430 wait complete. 2026-03-09T20:20:29.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:29 vm05 ceph-mon[61345]: from='client.24398 v1:192.168.123.109:0/2739954515' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.109", "placement": "1;vm09=iscsi.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:20:29.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:29 vm05 ceph-mon[61345]: Saving service iscsi.datapool spec with placement vm09=iscsi.a;count:1 2026-03-09T20:20:29.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:29 vm05 ceph-mon[61345]: from='client.24415 v1:192.168.123.109:0/2250478627' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm09=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:20:29.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:29 vm05 ceph-mon[61345]: Saving service prometheus spec with placement vm09=a;count:1 2026-03-09T20:20:29.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:29 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:29.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:29 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:29.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:29 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/2713574341' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-09T20:20:29.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:29 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/27172436' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-09T20:20:29.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:29 vm05 ceph-mon[61345]: osdmap e54: 8 total, 8 up, 8 in 2026-03-09T20:20:29.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:29 vm05 ceph-mon[51870]: from='client.24398 v1:192.168.123.109:0/2739954515' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.109", "placement": "1;vm09=iscsi.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:20:29.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:29 vm05 ceph-mon[51870]: Saving service iscsi.datapool spec with placement vm09=iscsi.a;count:1 2026-03-09T20:20:29.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:29 vm05 ceph-mon[51870]: from='client.24415 v1:192.168.123.109:0/2250478627' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm09=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:20:29.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:29 vm05 ceph-mon[51870]: Saving service prometheus spec with placement vm09=a;count:1 2026-03-09T20:20:29.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:29 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:29.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:29 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:29.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:29 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2713574341' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-09T20:20:29.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:29 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/27172436' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-09T20:20:29.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:29 vm05 ceph-mon[51870]: osdmap e54: 8 total, 8 up, 8 in 2026-03-09T20:20:29.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:29 vm09 ceph-mon[54524]: from='client.24398 v1:192.168.123.109:0/2739954515' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.109", "placement": "1;vm09=iscsi.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:20:29.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:29 vm09 ceph-mon[54524]: Saving service iscsi.datapool spec with placement vm09=iscsi.a;count:1 2026-03-09T20:20:29.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:29 vm09 ceph-mon[54524]: from='client.24415 v1:192.168.123.109:0/2250478627' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm09=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:20:29.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:29 vm09 ceph-mon[54524]: Saving service prometheus spec with placement vm09=a;count:1 2026-03-09T20:20:29.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:29 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:29.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:29 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:29.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:29 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2713574341' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-09T20:20:29.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:29 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/27172436' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-09T20:20:29.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:29 vm09 ceph-mon[54524]: osdmap e54: 8 total, 8 up, 8 in 2026-03-09T20:20:29.813 DEBUG:teuthology.orchestra.run.vm05:alertmanager.a> sudo journalctl -f -n 0 -u ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@alertmanager.a.service 2026-03-09T20:20:29.815 INFO:tasks.cephadm:Adding grafana.a on vm09 2026-03-09T20:20:29.815 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph orch apply grafana '1;vm09=a' 2026-03-09T20:20:29.981 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.b/config 2026-03-09T20:20:30.125 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:30.124+0000 7ffa8343c640 1 -- 192.168.123.109:0/3761341013 >> v1:192.168.123.105:6790/0 conn(0x7ffa7c108bc0 legacy=0x7ffa7c10b010 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:30.125 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:30.125+0000 7ffa8343c640 1 -- 192.168.123.109:0/3761341013 shutdown_connections 2026-03-09T20:20:30.125 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:30.125+0000 7ffa8343c640 1 -- 192.168.123.109:0/3761341013 >> 192.168.123.109:0/3761341013 conn(0x7ffa7c0fff40 msgr2=0x7ffa7c102360 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:30.125 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:30.125+0000 7ffa8343c640 1 -- 192.168.123.109:0/3761341013 shutdown_connections 2026-03-09T20:20:30.125 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:30.125+0000 7ffa8343c640 1 -- 192.168.123.109:0/3761341013 wait complete. 
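The journalctl entries above record a run of "ceph orch apply" calls that place the gateway and monitoring services: iscsi on vm09, prometheus on vm09, alertmanager on vm05, grafana on vm09, and (a little further down) node-exporter on both hosts. Each call carries a placement string such as "1;vm09=a" or "2;vm05=a;vm09=b", which appears to pack a daemon count together with explicit host=daemon-id pins. The snippet below is an illustrative sketch only, not cephadm's own parsing code, showing how such a string could be split apart:

    # Hypothetical parser for the placement strings seen in the log above,
    # e.g. "1;vm09=a", "1;vm09=iscsi.a", "2;vm05=a;vm09=b".
    # Layout assumed: <count>;<host>=<daemon-id>[;<host>=<daemon-id>...]
    def parse_placement(spec: str):
        parts = spec.split(";")
        count = int(parts[0])                              # leading daemon count
        hosts = dict(p.split("=", 1) for p in parts[1:])   # host -> daemon id
        return count, hosts

    # parse_placement("2;vm05=a;vm09=b") -> (2, {"vm05": "a", "vm09": "b"})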
2026-03-09T20:20:30.126 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:30.125+0000 7ffa8343c640 1 Processor -- start 2026-03-09T20:20:30.126 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:30.126+0000 7ffa8343c640 1 -- start start 2026-03-09T20:20:30.126 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:30.126+0000 7ffa8343c640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7ffa7c19c730 con 0x7ffa7c108bc0 2026-03-09T20:20:30.126 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:30.126+0000 7ffa8343c640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7ffa7c1a7ef0 con 0x7ffa7c104790 2026-03-09T20:20:30.126 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:30.126+0000 7ffa8343c640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7ffa7c1a90d0 con 0x7ffa7c10c8d0 2026-03-09T20:20:30.127 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:30.126+0000 7ffa8243a640 1 --1- >> v1:192.168.123.109:6789/0 conn(0x7ffa7c104790 0x7ffa7c19bbb0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.109:6789/0 says I am v1:192.168.123.109:55778/0 (socket says 192.168.123.109:55778) 2026-03-09T20:20:30.127 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:30.126+0000 7ffa8243a640 1 -- 192.168.123.109:0/3702305646 learned_addr learned my addr 192.168.123.109:0/3702305646 (peer_addr_for_me v1:192.168.123.109:0/0) 2026-03-09T20:20:30.127 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:30.127+0000 7ffa6b7fe640 1 -- 192.168.123.109:0/3702305646 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3580869886 0 0) 0x7ffa7c1a7ef0 con 0x7ffa7c104790 2026-03-09T20:20:30.127 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:30.127+0000 7ffa6b7fe640 1 -- 192.168.123.109:0/3702305646 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7ffa50003620 con 0x7ffa7c104790 2026-03-09T20:20:30.128 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:30.127+0000 7ffa6b7fe640 1 -- 192.168.123.109:0/3702305646 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 4255565917 0 0) 0x7ffa7c1a90d0 con 0x7ffa7c10c8d0 2026-03-09T20:20:30.128 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:30.127+0000 7ffa6b7fe640 1 -- 192.168.123.109:0/3702305646 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7ffa7c1a7ef0 con 0x7ffa7c10c8d0 2026-03-09T20:20:30.128 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:30.127+0000 7ffa6b7fe640 1 -- 192.168.123.109:0/3702305646 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 4091308078 0 0) 0x7ffa50003620 con 0x7ffa7c104790 2026-03-09T20:20:30.128 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:30.127+0000 7ffa6b7fe640 1 -- 192.168.123.109:0/3702305646 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7ffa7c1a90d0 con 0x7ffa7c104790 2026-03-09T20:20:30.128 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:30.127+0000 7ffa6b7fe640 1 -- 192.168.123.109:0/3702305646 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1806816839 0 0) 0x7ffa7c19c730 con 0x7ffa7c108bc0 2026-03-09T20:20:30.128 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:30.127+0000 7ffa6b7fe640 1 -- 192.168.123.109:0/3702305646 --> 
v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7ffa50003620 con 0x7ffa7c108bc0 2026-03-09T20:20:30.128 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:30.127+0000 7ffa6b7fe640 1 -- 192.168.123.109:0/3702305646 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7ffa70003160 con 0x7ffa7c104790 2026-03-09T20:20:30.129 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:30.127+0000 7ffa6b7fe640 1 -- 192.168.123.109:0/3702305646 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2689451064 0 0) 0x7ffa7c1a7ef0 con 0x7ffa7c10c8d0 2026-03-09T20:20:30.129 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:30.127+0000 7ffa6b7fe640 1 -- 192.168.123.109:0/3702305646 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7ffa7c19c730 con 0x7ffa7c10c8d0 2026-03-09T20:20:30.129 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:30.127+0000 7ffa6b7fe640 1 -- 192.168.123.109:0/3702305646 <== mon.1 v1:192.168.123.109:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2487584445 0 0) 0x7ffa7c1a90d0 con 0x7ffa7c104790 2026-03-09T20:20:30.129 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:30.127+0000 7ffa6b7fe640 1 -- 192.168.123.109:0/3702305646 >> v1:192.168.123.105:6790/0 conn(0x7ffa7c10c8d0 legacy=0x7ffa7c1a57c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:30.129 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:30.127+0000 7ffa6b7fe640 1 -- 192.168.123.109:0/3702305646 >> v1:192.168.123.105:6789/0 conn(0x7ffa7c108bc0 legacy=0x7ffa7c1a2090 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:30.129 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:30.127+0000 7ffa6b7fe640 1 -- 192.168.123.109:0/3702305646 --> v1:192.168.123.109:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7ffa7c1aa2b0 con 0x7ffa7c104790 2026-03-09T20:20:30.129 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:30.128+0000 7ffa8343c640 1 -- 192.168.123.109:0/3702305646 --> v1:192.168.123.109:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7ffa7c1a92a0 con 0x7ffa7c104790 2026-03-09T20:20:30.129 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:30.128+0000 7ffa8343c640 1 -- 192.168.123.109:0/3702305646 --> v1:192.168.123.109:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7ffa7c1a9830 con 0x7ffa7c104790 2026-03-09T20:20:30.130 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:30.129+0000 7ffa6b7fe640 1 -- 192.168.123.109:0/3702305646 <== mon.1 v1:192.168.123.109:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7ffa700034a0 con 0x7ffa7c104790 2026-03-09T20:20:30.131 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:30.129+0000 7ffa6b7fe640 1 -- 192.168.123.109:0/3702305646 <== mon.1 v1:192.168.123.109:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7ffa70005c60 con 0x7ffa7c104790 2026-03-09T20:20:30.131 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:30.129+0000 7ffa6b7fe640 1 -- 192.168.123.109:0/3702305646 <== mon.1 v1:192.168.123.109:6789/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 4267040124 0 0) 0x7ffa70006f10 con 0x7ffa7c104790 2026-03-09T20:20:30.131 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:30.129+0000 7ffa8343c640 1 -- 192.168.123.109:0/3702305646 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7ffa4c005180 con 0x7ffa7c104790 
2026-03-09T20:20:30.131 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:30.130+0000 7ffa6b7fe640 1 -- 192.168.123.109:0/3702305646 <== mon.1 v1:192.168.123.109:6789/0 8 ==== osd_map(55..55 src has 1..55) ==== 5895+0+0 (unknown 1559733976 0 0) 0x7ffa70095d00 con 0x7ffa7c104790 2026-03-09T20:20:30.132 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:30.132+0000 7ffa6b7fe640 1 -- 192.168.123.109:0/3702305646 <== mon.1 v1:192.168.123.109:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7ffa7005ee70 con 0x7ffa7c104790 2026-03-09T20:20:30.231 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:30.231+0000 7ffa8343c640 1 -- 192.168.123.109:0/3702305646 --> v1:192.168.123.105:6800/3290461294 -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm09=a", "target": ["mon-mgr", ""]}) -- 0x7ffa4c002bf0 con 0x7ffa500783a0 2026-03-09T20:20:30.240 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:30.239+0000 7ffa6b7fe640 1 -- 192.168.123.109:0/3702305646 <== mgr.14150 v1:192.168.123.105:6800/3290461294 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+28 (unknown 0 0 664801700) 0x7ffa4c002bf0 con 0x7ffa500783a0 2026-03-09T20:20:30.240 INFO:teuthology.orchestra.run.vm09.stdout:Scheduled grafana update... 2026-03-09T20:20:30.242 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:30.242+0000 7ffa8343c640 1 -- 192.168.123.109:0/3702305646 >> v1:192.168.123.105:6800/3290461294 conn(0x7ffa500783a0 legacy=0x7ffa5007a860 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:30.243 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:30.242+0000 7ffa8343c640 1 -- 192.168.123.109:0/3702305646 >> v1:192.168.123.109:6789/0 conn(0x7ffa7c104790 legacy=0x7ffa7c19bbb0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:30.243 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:30.242+0000 7ffa8343c640 1 -- 192.168.123.109:0/3702305646 shutdown_connections 2026-03-09T20:20:30.243 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:30.242+0000 7ffa8343c640 1 -- 192.168.123.109:0/3702305646 >> 192.168.123.109:0/3702305646 conn(0x7ffa7c0fff40 msgr2=0x7ffa7c10f4c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:30.243 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:30.243+0000 7ffa8343c640 1 -- 192.168.123.109:0/3702305646 shutdown_connections 2026-03-09T20:20:30.243 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:30.243+0000 7ffa8343c640 1 -- 192.168.123.109:0/3702305646 wait complete. 2026-03-09T20:20:30.419 DEBUG:teuthology.orchestra.run.vm09:grafana.a> sudo journalctl -f -n 0 -u ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@grafana.a.service 2026-03-09T20:20:30.421 INFO:tasks.cephadm:Setting up client nodes... 
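Every orchestrator call in this run is funnelled through the same wrapper visible in the commands above: sudo cephadm --image <ci image> shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid <fsid> -- ceph <subcommand>, i.e. the ceph CLI is executed inside a container built from the pinned CI image. A minimal sketch of such a wrapper, assuming local execution via subprocess (teuthology itself drives these commands remotely over SSH); the image and fsid values are simply the ones used in this run:

    import subprocess

    # Values taken from the commands logged in this run; they would differ elsewhere.
    IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"
    FSID = "c0151936-1bf4-11f1-b896-23f7bea8a6ea"

    def cephadm_shell(*ceph_args: str) -> str:
        """Run 'ceph <args>' inside a cephadm shell container and return its stdout."""
        cmd = [
            "sudo", "cephadm", "--image", IMAGE, "shell",
            "-c", "/etc/ceph/ceph.conf",
            "-k", "/etc/ceph/ceph.client.admin.keyring",
            "--fsid", FSID,
            "--", "ceph", *ceph_args,
        ]
        return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

    # e.g. cephadm_shell("orch", "apply", "grafana", "1;vm09=a")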
2026-03-09T20:20:30.421 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph auth get-or-create client.0 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-09T20:20:30.610 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:20:30.715 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:30 vm05 ceph-mon[51870]: from='client.24421 v1:192.168.123.109:0/347711747' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm05=a;vm09=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:20:30.715 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:30 vm05 ceph-mon[51870]: Saving service node-exporter spec with placement vm05=a;vm09=b;count:2 2026-03-09T20:20:30.715 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:30 vm05 ceph-mon[51870]: pgmap v110: 100 pgs: 53 active+clean, 8 creating+peering, 39 unknown; 450 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T20:20:30.715 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:30 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:30.715 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:30 vm05 ceph-mon[51870]: osdmap e55: 8 total, 8 up, 8 in 2026-03-09T20:20:30.715 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:30 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2713574341' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T20:20:30.715 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:30 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/27172436' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T20:20:30.715 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:30 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:30.715 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:30 vm05 ceph-mon[61345]: from='client.24421 v1:192.168.123.109:0/347711747' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm05=a;vm09=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:20:30.715 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:30 vm05 ceph-mon[61345]: Saving service node-exporter spec with placement vm05=a;vm09=b;count:2 2026-03-09T20:20:30.715 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:30 vm05 ceph-mon[61345]: pgmap v110: 100 pgs: 53 active+clean, 8 creating+peering, 39 unknown; 450 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T20:20:30.715 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:30 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:30.715 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:30 vm05 ceph-mon[61345]: osdmap e55: 8 total, 8 up, 8 in 2026-03-09T20:20:30.715 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:30 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/2713574341' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T20:20:30.715 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:30 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/27172436' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T20:20:30.715 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:30 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:30.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:30 vm09 ceph-mon[54524]: from='client.24421 v1:192.168.123.109:0/347711747' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm05=a;vm09=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:20:30.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:30 vm09 ceph-mon[54524]: Saving service node-exporter spec with placement vm05=a;vm09=b;count:2 2026-03-09T20:20:30.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:30 vm09 ceph-mon[54524]: pgmap v110: 100 pgs: 53 active+clean, 8 creating+peering, 39 unknown; 450 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T20:20:30.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:30 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:30.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:30 vm09 ceph-mon[54524]: osdmap e55: 8 total, 8 up, 8 in 2026-03-09T20:20:30.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:30 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2713574341' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T20:20:30.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:30 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/27172436' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T20:20:30.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:30 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:30.783 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:30.783+0000 7ffb101a9640 1 -- 192.168.123.105:0/3709602305 >> v1:192.168.123.105:6789/0 conn(0x7ffb08108dc0 legacy=0x7ffb0810b210 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:30.783 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:30.784+0000 7ffb101a9640 1 -- 192.168.123.105:0/3709602305 shutdown_connections 2026-03-09T20:20:30.783 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:30.784+0000 7ffb101a9640 1 -- 192.168.123.105:0/3709602305 >> 192.168.123.105:0/3709602305 conn(0x7ffb08100120 msgr2=0x7ffb08102560 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:30.783 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:30.784+0000 7ffb101a9640 1 -- 192.168.123.105:0/3709602305 shutdown_connections 2026-03-09T20:20:30.783 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:30.784+0000 7ffb101a9640 1 -- 192.168.123.105:0/3709602305 wait complete. 
2026-03-09T20:20:30.783 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:30.784+0000 7ffb101a9640 1 Processor -- start 2026-03-09T20:20:30.784 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:30.784+0000 7ffb101a9640 1 -- start start 2026-03-09T20:20:30.785 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:30.785+0000 7ffb101a9640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7ffb0819c950 con 0x7ffb0810cad0 2026-03-09T20:20:30.785 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:30.785+0000 7ffb101a9640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7ffb081a8110 con 0x7ffb08104990 2026-03-09T20:20:30.786 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:30.785+0000 7ffb101a9640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7ffb081a92f0 con 0x7ffb08108dc0 2026-03-09T20:20:30.787 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:30.785+0000 7ffb0d71d640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7ffb08108dc0 0x7ffb081a22b0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.105:47338/0 (socket says 192.168.123.105:47338) 2026-03-09T20:20:30.787 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:30.785+0000 7ffb0d71d640 1 -- 192.168.123.105:0/1852294051 learned_addr learned my addr 192.168.123.105:0/1852294051 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:20:30.787 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:30.786+0000 7ffaf6ffd640 1 -- 192.168.123.105:0/1852294051 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 402804502 0 0) 0x7ffb081a92f0 con 0x7ffb08108dc0 2026-03-09T20:20:30.790 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:30.786+0000 7ffaf6ffd640 1 -- 192.168.123.105:0/1852294051 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7ffadc003620 con 0x7ffb08108dc0 2026-03-09T20:20:30.791 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:30.786+0000 7ffaf6ffd640 1 -- 192.168.123.105:0/1852294051 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1852177755 0 0) 0x7ffadc003620 con 0x7ffb08108dc0 2026-03-09T20:20:30.791 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:30.786+0000 7ffaf6ffd640 1 -- 192.168.123.105:0/1852294051 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7ffb081a92f0 con 0x7ffb08108dc0 2026-03-09T20:20:30.791 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:30.786+0000 7ffaf6ffd640 1 -- 192.168.123.105:0/1852294051 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7ffaf8003200 con 0x7ffb08108dc0 2026-03-09T20:20:30.791 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:30.786+0000 7ffaf6ffd640 1 -- 192.168.123.105:0/1852294051 <== mon.2 v1:192.168.123.105:6790/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 1188288926 0 0) 0x7ffb081a92f0 con 0x7ffb08108dc0 2026-03-09T20:20:30.791 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:30.786+0000 7ffaf6ffd640 1 -- 192.168.123.105:0/1852294051 >> v1:192.168.123.109:6789/0 conn(0x7ffb08104990 legacy=0x7ffb0819bdd0 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-09T20:20:30.791 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:30.786+0000 7ffaf6ffd640 1 -- 192.168.123.105:0/1852294051 >> 
v1:192.168.123.105:6789/0 conn(0x7ffb0810cad0 legacy=0x7ffb081a59e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:30.791 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:30.786+0000 7ffaf6ffd640 1 -- 192.168.123.105:0/1852294051 --> v1:192.168.123.105:6790/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7ffb081aa4d0 con 0x7ffb08108dc0 2026-03-09T20:20:30.791 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:30.786+0000 7ffb101a9640 1 -- 192.168.123.105:0/1852294051 --> v1:192.168.123.105:6790/0 -- mon_subscribe({mgrmap=0+}) -- 0x7ffb081a8340 con 0x7ffb08108dc0 2026-03-09T20:20:30.791 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:30.786+0000 7ffb101a9640 1 -- 192.168.123.105:0/1852294051 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=0}) -- 0x7ffb081a8920 con 0x7ffb08108dc0 2026-03-09T20:20:30.792 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:30.786+0000 7ffaf6ffd640 1 -- 192.168.123.105:0/1852294051 <== mon.2 v1:192.168.123.105:6790/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7ffaf8004060 con 0x7ffb08108dc0 2026-03-09T20:20:30.792 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:30.786+0000 7ffaf6ffd640 1 -- 192.168.123.105:0/1852294051 <== mon.2 v1:192.168.123.105:6790/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7ffaf80050c0 con 0x7ffb08108dc0 2026-03-09T20:20:30.792 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:30.787+0000 7ffaf6ffd640 1 -- 192.168.123.105:0/1852294051 <== mon.2 v1:192.168.123.105:6790/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 4267040124 0 0) 0x7ffaf80052e0 con 0x7ffb08108dc0 2026-03-09T20:20:30.792 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:30.788+0000 7ffaf6ffd640 1 -- 192.168.123.105:0/1852294051 <== mon.2 v1:192.168.123.105:6790/0 8 ==== osd_map(55..55 src has 1..55) ==== 5895+0+0 (unknown 1559733976 0 0) 0x7ffaf80953f0 con 0x7ffb08108dc0 2026-03-09T20:20:30.792 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:30.788+0000 7ffb101a9640 1 -- 192.168.123.105:0/1852294051 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7ffad0005180 con 0x7ffb08108dc0 2026-03-09T20:20:30.792 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:30.791+0000 7ffaf6ffd640 1 -- 192.168.123.105:0/1852294051 <== mon.2 v1:192.168.123.105:6790/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7ffaf805f300 con 0x7ffb08108dc0 2026-03-09T20:20:30.934 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:30.934+0000 7ffb101a9640 1 -- 192.168.123.105:0/1852294051 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]} v 0) -- 0x7ffad0005470 con 0x7ffb08108dc0 2026-03-09T20:20:30.940 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:30.941+0000 7ffaf6ffd640 1 -- 192.168.123.105:0/1852294051 <== mon.2 v1:192.168.123.105:6790/0 10 ==== mon_command_ack([{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]=0 v15) ==== 170+0+59 (unknown 326183931 0 2556041065) 0x7ffaf8062fb0 con 0x7ffb08108dc0 2026-03-09T20:20:30.940 INFO:teuthology.orchestra.run.vm05.stdout:[client.0] 2026-03-09T20:20:30.940 INFO:teuthology.orchestra.run.vm05.stdout: key = AQCOK69pwXvBNxAATJEuYSjaHq0ny+21KaCCtg== 
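The client.0 keyring returned by "ceph auth get-or-create" above is then streamed onto the client node with sudo dd of=/etc/ceph/ceph.client.0.keyring and made world-readable with chmod 0644 (see the entries that follow); the same sequence is repeated for client.1. A rough sketch of that sequence, assuming local execution and reusing the hypothetical cephadm_shell helper sketched a few entries earlier:

    import subprocess

    def install_client_keyring(client_id: int) -> None:
        # Create (or fetch) the client key with the broad caps used in this run.
        caps = ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]
        keyring = cephadm_shell("auth", "get-or-create", f"client.{client_id}", *caps)
        path = f"/etc/ceph/ceph.client.{client_id}.keyring"
        # Write the keyring text via dd (as in the log) and loosen its permissions.
        subprocess.run(["sudo", "dd", f"of={path}"], input=keyring, text=True, check=True)
        subprocess.run(["sudo", "chmod", "0644", path], check=True)

    # install_client_keyring(0); install_client_keyring(1)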
2026-03-09T20:20:30.943 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:30.943+0000 7ffb101a9640 1 -- 192.168.123.105:0/1852294051 >> v1:192.168.123.105:6800/3290461294 conn(0x7ffadc0894f0 legacy=0x7ffadc08b9b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:30.943 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:30.943+0000 7ffb101a9640 1 -- 192.168.123.105:0/1852294051 >> v1:192.168.123.105:6790/0 conn(0x7ffb08108dc0 legacy=0x7ffb081a22b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:30.943 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:30.944+0000 7ffb101a9640 1 -- 192.168.123.105:0/1852294051 shutdown_connections 2026-03-09T20:20:30.943 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:30.944+0000 7ffb101a9640 1 -- 192.168.123.105:0/1852294051 >> 192.168.123.105:0/1852294051 conn(0x7ffb08100120 msgr2=0x7ffb081091e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:30.943 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:30.944+0000 7ffb101a9640 1 -- 192.168.123.105:0/1852294051 shutdown_connections 2026-03-09T20:20:30.943 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:30.944+0000 7ffb101a9640 1 -- 192.168.123.105:0/1852294051 wait complete. 2026-03-09T20:20:31.132 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-09T20:20:31.132 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/ceph/ceph.client.0.keyring 2026-03-09T20:20:31.132 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod 0644 /etc/ceph/ceph.client.0.keyring 2026-03-09T20:20:31.168 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph auth get-or-create client.1 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-09T20:20:31.347 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.b/config 2026-03-09T20:20:31.459 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:31 vm09 ceph-mon[54524]: from='client.24427 v1:192.168.123.109:0/31932430' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "placement": "1;vm05=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:20:31.459 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:31 vm09 ceph-mon[54524]: Saving service alertmanager spec with placement vm05=a;count:1 2026-03-09T20:20:31.459 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:31 vm09 ceph-mon[54524]: from='client.24433 v1:192.168.123.109:0/3702305646' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm09=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:20:31.459 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:31 vm09 ceph-mon[54524]: Saving service grafana spec with placement vm09=a;count:1 2026-03-09T20:20:31.459 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:31 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/1852294051' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T20:20:31.459 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:31 vm09 ceph-mon[54524]: from='client.24422 ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T20:20:31.459 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:31 vm09 ceph-mon[54524]: from='client.24422 ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T20:20:31.459 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:31 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2713574341' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T20:20:31.459 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:31 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/27172436' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T20:20:31.459 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:31 vm09 ceph-mon[54524]: osdmap e56: 8 total, 8 up, 8 in 2026-03-09T20:20:31.459 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:31 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2713574341' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T20:20:31.459 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:31 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/27172436' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T20:20:31.480 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:31.479+0000 7fa2dfffe640 1 -- 192.168.123.109:0/2002774934 >> v1:192.168.123.105:6789/0 conn(0x7fa2d8108dc0 legacy=0x7fa2d810b210 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:31.480 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:31.480+0000 7fa2dfffe640 1 -- 192.168.123.109:0/2002774934 shutdown_connections 2026-03-09T20:20:31.480 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:31.480+0000 7fa2dfffe640 1 -- 192.168.123.109:0/2002774934 >> 192.168.123.109:0/2002774934 conn(0x7fa2d8100120 msgr2=0x7fa2d8102560 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:31.480 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:31.480+0000 7fa2dfffe640 1 -- 192.168.123.109:0/2002774934 shutdown_connections 2026-03-09T20:20:31.481 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:31.480+0000 7fa2dfffe640 1 -- 192.168.123.109:0/2002774934 wait complete. 
2026-03-09T20:20:31.481 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:31.481+0000 7fa2dfffe640 1 Processor -- start 2026-03-09T20:20:31.481 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:31.481+0000 7fa2dfffe640 1 -- start start 2026-03-09T20:20:31.482 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:31.481+0000 7fa2dfffe640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fa2d819ca70 con 0x7fa2d8104990 2026-03-09T20:20:31.482 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:31.481+0000 7fa2dfffe640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fa2d81a8230 con 0x7fa2d810cad0 2026-03-09T20:20:31.482 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:31.481+0000 7fa2dfffe640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fa2d81a9410 con 0x7fa2d8108dc0 2026-03-09T20:20:31.482 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:31.482+0000 7fa2dd572640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7fa2d8108dc0 0x7fa2d81a23d0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.109:56702/0 (socket says 192.168.123.109:56702) 2026-03-09T20:20:31.482 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:31.482+0000 7fa2dd572640 1 -- 192.168.123.109:0/3721918138 learned_addr learned my addr 192.168.123.109:0/3721918138 (peer_addr_for_me v1:192.168.123.109:0/0) 2026-03-09T20:20:31.482 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:31.482+0000 7fa2c6ffd640 1 -- 192.168.123.109:0/3721918138 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1047788066 0 0) 0x7fa2d81a8230 con 0x7fa2d810cad0 2026-03-09T20:20:31.483 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:31.482+0000 7fa2c6ffd640 1 -- 192.168.123.109:0/3721918138 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fa2b4003620 con 0x7fa2d810cad0 2026-03-09T20:20:31.483 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:31.482+0000 7fa2c6ffd640 1 -- 192.168.123.109:0/3721918138 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2991469433 0 0) 0x7fa2d81a9410 con 0x7fa2d8108dc0 2026-03-09T20:20:31.483 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:31.482+0000 7fa2c6ffd640 1 -- 192.168.123.109:0/3721918138 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fa2d81a8230 con 0x7fa2d8108dc0 2026-03-09T20:20:31.483 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:31.482+0000 7fa2c6ffd640 1 -- 192.168.123.109:0/3721918138 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2544563045 0 0) 0x7fa2b4003620 con 0x7fa2d810cad0 2026-03-09T20:20:31.483 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:31.482+0000 7fa2c6ffd640 1 -- 192.168.123.109:0/3721918138 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fa2d81a9410 con 0x7fa2d810cad0 2026-03-09T20:20:31.483 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:31.482+0000 7fa2c6ffd640 1 -- 192.168.123.109:0/3721918138 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7fa2d4003440 con 0x7fa2d810cad0 2026-03-09T20:20:31.483 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:31.482+0000 7fa2c6ffd640 1 -- 192.168.123.109:0/3721918138 <== mon.2 
v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1307076383 0 0) 0x7fa2d81a8230 con 0x7fa2d8108dc0 2026-03-09T20:20:31.483 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:31.482+0000 7fa2c6ffd640 1 -- 192.168.123.109:0/3721918138 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fa2b4003620 con 0x7fa2d8108dc0 2026-03-09T20:20:31.483 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:31.482+0000 7fa2c6ffd640 1 -- 192.168.123.109:0/3721918138 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7fa2cc002c90 con 0x7fa2d8108dc0 2026-03-09T20:20:31.483 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:31.483+0000 7fa2c6ffd640 1 -- 192.168.123.109:0/3721918138 <== mon.1 v1:192.168.123.109:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2701615730 0 0) 0x7fa2d81a9410 con 0x7fa2d810cad0 2026-03-09T20:20:31.483 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:31.483+0000 7fa2c6ffd640 1 -- 192.168.123.109:0/3721918138 >> v1:192.168.123.105:6790/0 conn(0x7fa2d8108dc0 legacy=0x7fa2d81a23d0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:31.483 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:31.483+0000 7fa2c6ffd640 1 -- 192.168.123.109:0/3721918138 >> v1:192.168.123.105:6789/0 conn(0x7fa2d8104990 legacy=0x7fa2d819bef0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:31.483 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:31.483+0000 7fa2c6ffd640 1 -- 192.168.123.109:0/3721918138 --> v1:192.168.123.109:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fa2d81aa5f0 con 0x7fa2d810cad0 2026-03-09T20:20:31.484 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:31.483+0000 7fa2dfffe640 1 -- 192.168.123.109:0/3721918138 --> v1:192.168.123.109:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7fa2d81a8460 con 0x7fa2d810cad0 2026-03-09T20:20:31.484 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:31.483+0000 7fa2dfffe640 1 -- 192.168.123.109:0/3721918138 --> v1:192.168.123.109:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7fa2d81a89f0 con 0x7fa2d810cad0 2026-03-09T20:20:31.484 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:31.483+0000 7fa2c6ffd640 1 -- 192.168.123.109:0/3721918138 <== mon.1 v1:192.168.123.109:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7fa2d40038b0 con 0x7fa2d810cad0 2026-03-09T20:20:31.484 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:31.483+0000 7fa2c6ffd640 1 -- 192.168.123.109:0/3721918138 <== mon.1 v1:192.168.123.109:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7fa2d4004e00 con 0x7fa2d810cad0 2026-03-09T20:20:31.487 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:31.484+0000 7fa2dfffe640 1 -- 192.168.123.109:0/3721918138 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fa2a0005180 con 0x7fa2d810cad0 2026-03-09T20:20:31.487 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:31.485+0000 7fa2c6ffd640 1 -- 192.168.123.109:0/3721918138 <== mon.1 v1:192.168.123.109:6789/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 4267040124 0 0) 0x7fa2d4004220 con 0x7fa2d810cad0 2026-03-09T20:20:31.487 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:31.485+0000 7fa2c6ffd640 1 -- 192.168.123.109:0/3721918138 <== mon.1 v1:192.168.123.109:6789/0 8 ==== osd_map(56..56 src has 1..56) ==== 5906+0+0 (unknown 257815932 0 
0) 0x7fa2d4094060 con 0x7fa2d810cad0 2026-03-09T20:20:31.488 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:31.488+0000 7fa2c6ffd640 1 -- 192.168.123.109:0/3721918138 <== mon.1 v1:192.168.123.109:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7fa2d405d140 con 0x7fa2d810cad0 2026-03-09T20:20:31.622 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:31.621+0000 7fa2dfffe640 1 -- 192.168.123.109:0/3721918138 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]} v 0) -- 0x7fa2a0005470 con 0x7fa2d810cad0 2026-03-09T20:20:31.627 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:31.627+0000 7fa2c6ffd640 1 -- 192.168.123.109:0/3721918138 <== mon.1 v1:192.168.123.109:6789/0 10 ==== mon_command_ack([{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]=0 v16) ==== 170+0+59 (unknown 3526050746 0 3796997267) 0x7fa2d4060df0 con 0x7fa2d810cad0 2026-03-09T20:20:31.627 INFO:teuthology.orchestra.run.vm09.stdout:[client.1] 2026-03-09T20:20:31.627 INFO:teuthology.orchestra.run.vm09.stdout: key = AQCPK69pr9Y4JRAA4NthUhsmT7iburh767KzRg== 2026-03-09T20:20:31.629 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:31.629+0000 7fa2dfffe640 1 -- 192.168.123.109:0/3721918138 >> v1:192.168.123.105:6800/3290461294 conn(0x7fa2b40781d0 legacy=0x7fa2b407a690 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:31.629 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:31.629+0000 7fa2dfffe640 1 -- 192.168.123.109:0/3721918138 >> v1:192.168.123.109:6789/0 conn(0x7fa2d810cad0 legacy=0x7fa2d81a5b00 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:31.630 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:31.629+0000 7fa2dfffe640 1 -- 192.168.123.109:0/3721918138 shutdown_connections 2026-03-09T20:20:31.630 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:31.629+0000 7fa2dfffe640 1 -- 192.168.123.109:0/3721918138 >> 192.168.123.109:0/3721918138 conn(0x7fa2d8100120 msgr2=0x7fa2d8109200 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:31.630 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:31.630+0000 7fa2dfffe640 1 -- 192.168.123.109:0/3721918138 shutdown_connections 2026-03-09T20:20:31.630 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T20:20:31.630+0000 7fa2dfffe640 1 -- 192.168.123.109:0/3721918138 wait complete. 2026-03-09T20:20:31.778 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T20:20:31.778 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/ceph/ceph.client.1.keyring 2026-03-09T20:20:31.778 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod 0644 /etc/ceph/ceph.client.1.keyring 2026-03-09T20:20:31.811 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean... 
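With both client keyrings in place, the cephadm task starts waiting for the cluster to settle; the next entries show it polling ceph mgr dump --format=json ("waiting for mgr available"). A minimal sketch of such a poll, assuming the mgr map JSON exposes the usual "available" flag and "active_name" field, and again reusing the hypothetical cephadm_shell helper from the earlier sketch:

    import json
    import time

    def wait_for_mgr(timeout: float = 300.0, interval: float = 5.0) -> str:
        """Poll the mgr map until a manager reports as available; return its name."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            dump = json.loads(cephadm_shell("mgr", "dump", "--format=json"))
            if dump.get("available"):           # assumed field, as in 'ceph mgr dump'
                return dump.get("active_name", "")
            time.sleep(interval)
        raise TimeoutError("no active mgr became available")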
2026-03-09T20:20:31.811 INFO:tasks.cephadm.ceph_manager.ceph:waiting for mgr available 2026-03-09T20:20:31.811 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph mgr dump --format=json 2026-03-09T20:20:31.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:31 vm05 ceph-mon[51870]: from='client.24427 v1:192.168.123.109:0/31932430' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "placement": "1;vm05=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:20:31.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:31 vm05 ceph-mon[51870]: Saving service alertmanager spec with placement vm05=a;count:1 2026-03-09T20:20:31.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:31 vm05 ceph-mon[51870]: from='client.24433 v1:192.168.123.109:0/3702305646' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm09=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:20:31.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:31 vm05 ceph-mon[51870]: Saving service grafana spec with placement vm09=a;count:1 2026-03-09T20:20:31.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:31 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1852294051' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T20:20:31.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:31 vm05 ceph-mon[51870]: from='client.24422 ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T20:20:31.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:31 vm05 ceph-mon[51870]: from='client.24422 ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T20:20:31.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:31 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2713574341' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T20:20:31.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:31 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/27172436' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T20:20:31.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:31 vm05 ceph-mon[51870]: osdmap e56: 8 total, 8 up, 8 in 2026-03-09T20:20:31.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:31 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2713574341' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T20:20:31.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:31 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/27172436' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T20:20:31.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:31 vm05 ceph-mon[61345]: from='client.24427 v1:192.168.123.109:0/31932430' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "placement": "1;vm05=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:20:31.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:31 vm05 ceph-mon[61345]: Saving service alertmanager spec with placement vm05=a;count:1 2026-03-09T20:20:31.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:31 vm05 ceph-mon[61345]: from='client.24433 v1:192.168.123.109:0/3702305646' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm09=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:20:31.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:31 vm05 ceph-mon[61345]: Saving service grafana spec with placement vm09=a;count:1 2026-03-09T20:20:31.834 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:31 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1852294051' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T20:20:31.834 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:31 vm05 ceph-mon[61345]: from='client.24422 ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T20:20:31.834 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:31 vm05 ceph-mon[61345]: from='client.24422 ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T20:20:31.834 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:31 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2713574341' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T20:20:31.834 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:31 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/27172436' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T20:20:31.834 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:31 vm05 ceph-mon[61345]: osdmap e56: 8 total, 8 up, 8 in 2026-03-09T20:20:31.834 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:31 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2713574341' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T20:20:31.834 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:31 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/27172436' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T20:20:31.974 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:20:32.129 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.128+0000 7feab6ee9640 1 -- 192.168.123.105:0/442608914 >> v1:192.168.123.105:6789/0 conn(0x7feab0077340 legacy=0x7feab00797e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:32.129 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.128+0000 7feab6ee9640 1 -- 192.168.123.105:0/442608914 shutdown_connections 2026-03-09T20:20:32.129 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.129+0000 7feab6ee9640 1 -- 192.168.123.105:0/442608914 >> 192.168.123.105:0/442608914 conn(0x7feab006d560 msgr2=0x7feab006d970 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:32.130 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.130+0000 7feab6ee9640 1 -- 192.168.123.105:0/442608914 shutdown_connections 2026-03-09T20:20:32.130 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.130+0000 7feab6ee9640 1 -- 192.168.123.105:0/442608914 wait complete. 2026-03-09T20:20:32.130 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.130+0000 7feab6ee9640 1 Processor -- start 2026-03-09T20:20:32.130 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.130+0000 7feab6ee9640 1 -- start start 2026-03-09T20:20:32.130 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.130+0000 7feab6ee9640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7feab010e5a0 con 0x7feab0136300 2026-03-09T20:20:32.130 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.130+0000 7feab6ee9640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7feab01cae90 con 0x7feab0074040 2026-03-09T20:20:32.130 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.130+0000 7feab6ee9640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7feab01cc070 con 0x7feab007af00 2026-03-09T20:20:32.130 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.130+0000 7feab5ee7640 1 --1- >> v1:192.168.123.109:6789/0 conn(0x7feab0074040 0x7feab010dda0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.109:6789/0 says I am v1:192.168.123.105:45926/0 (socket says 192.168.123.105:45926) 2026-03-09T20:20:32.130 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.130+0000 7feab5ee7640 1 -- 192.168.123.105:0/2724976054 learned_addr learned my addr 192.168.123.105:0/2724976054 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:20:32.130 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.131+0000 7feaa6ffd640 1 -- 192.168.123.105:0/2724976054 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2800553229 0 0) 0x7feab01cae90 con 0x7feab0074040 2026-03-09T20:20:32.130 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.131+0000 7feaa6ffd640 1 -- 192.168.123.105:0/2724976054 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fea8c003880 con 0x7feab0074040 2026-03-09T20:20:32.130 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.131+0000 7feaa6ffd640 1 -- 192.168.123.105:0/2724976054 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 
2 0 (0) Success) ==== 764+0+0 (unknown 3143586679 0 0) 0x7fea8c003880 con 0x7feab0074040 2026-03-09T20:20:32.130 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.131+0000 7feaa6ffd640 1 -- 192.168.123.105:0/2724976054 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7feab01cae90 con 0x7feab0074040 2026-03-09T20:20:32.130 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.131+0000 7feaa6ffd640 1 -- 192.168.123.105:0/2724976054 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7feaac003080 con 0x7feab0074040 2026-03-09T20:20:32.130 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.131+0000 7feaa6ffd640 1 -- 192.168.123.105:0/2724976054 <== mon.1 v1:192.168.123.109:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 1841770486 0 0) 0x7feab01cae90 con 0x7feab0074040 2026-03-09T20:20:32.130 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.131+0000 7feaa6ffd640 1 -- 192.168.123.105:0/2724976054 >> v1:192.168.123.105:6790/0 conn(0x7feab007af00 legacy=0x7feab0085500 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:32.130 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.131+0000 7feaa6ffd640 1 -- 192.168.123.105:0/2724976054 >> v1:192.168.123.105:6789/0 conn(0x7feab0136300 legacy=0x7feab0085c10 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:32.130 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.131+0000 7feaa6ffd640 1 -- 192.168.123.105:0/2724976054 --> v1:192.168.123.109:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7feab01cd250 con 0x7feab0074040 2026-03-09T20:20:32.131 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.131+0000 7feab6ee9640 1 -- 192.168.123.105:0/2724976054 --> v1:192.168.123.109:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7feab01cc240 con 0x7feab0074040 2026-03-09T20:20:32.131 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.131+0000 7feab6ee9640 1 -- 192.168.123.105:0/2724976054 --> v1:192.168.123.109:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7feab01cc7f0 con 0x7feab0074040 2026-03-09T20:20:32.131 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.132+0000 7feaa6ffd640 1 -- 192.168.123.105:0/2724976054 <== mon.1 v1:192.168.123.109:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7feaac003220 con 0x7feab0074040 2026-03-09T20:20:32.131 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.132+0000 7feaa6ffd640 1 -- 192.168.123.105:0/2724976054 <== mon.1 v1:192.168.123.109:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7feaac005d70 con 0x7feab0074040 2026-03-09T20:20:32.131 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.132+0000 7feab6ee9640 1 -- 192.168.123.105:0/2724976054 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7feab010a470 con 0x7feab0074040 2026-03-09T20:20:32.133 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.133+0000 7feaa6ffd640 1 -- 192.168.123.105:0/2724976054 <== mon.1 v1:192.168.123.109:6789/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 4267040124 0 0) 0x7feaac005200 con 0x7feab0074040 2026-03-09T20:20:32.133 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.134+0000 7feaa6ffd640 1 -- 192.168.123.105:0/2724976054 <== mon.1 v1:192.168.123.109:6789/0 8 ==== osd_map(57..57 src has 1..57) ==== 5922+0+0 (unknown 2562478528 0 0) 0x7feaac094f90 con 0x7feab0074040 
2026-03-09T20:20:32.135 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.136+0000 7feaa6ffd640 1 -- 192.168.123.105:0/2724976054 <== mon.1 v1:192.168.123.109:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7feaac05e060 con 0x7feab0074040 2026-03-09T20:20:32.297 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.298+0000 7feab6ee9640 1 -- 192.168.123.105:0/2724976054 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "mgr dump", "format": "json"} v 0) -- 0x7feab0081e00 con 0x7feab0074040 2026-03-09T20:20:32.311 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.312+0000 7feaa6ffd640 1 -- 192.168.123.105:0/2724976054 <== mon.1 v1:192.168.123.109:6789/0 10 ==== mon_command_ack([{"prefix": "mgr dump", "format": "json"}]=0 v15) ==== 74+0+191981 (unknown 170547878 0 1931499002) 0x7feaac061d10 con 0x7feab0074040 2026-03-09T20:20:32.312 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:20:32.314 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.315+0000 7feab6ee9640 1 -- 192.168.123.105:0/2724976054 >> v1:192.168.123.105:6800/3290461294 conn(0x7fea8c078330 legacy=0x7fea8c07a7f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:32.314 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.315+0000 7feab6ee9640 1 -- 192.168.123.105:0/2724976054 >> v1:192.168.123.109:6789/0 conn(0x7feab0074040 legacy=0x7feab010dda0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:32.315 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.315+0000 7feab6ee9640 1 -- 192.168.123.105:0/2724976054 shutdown_connections 2026-03-09T20:20:32.315 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.315+0000 7feab6ee9640 1 -- 192.168.123.105:0/2724976054 >> 192.168.123.105:0/2724976054 conn(0x7feab006d560 msgr2=0x7feab007d3c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:32.315 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.315+0000 7feab6ee9640 1 -- 192.168.123.105:0/2724976054 shutdown_connections 2026-03-09T20:20:32.315 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.315+0000 7feab6ee9640 1 -- 192.168.123.105:0/2724976054 wait complete. 2026-03-09T20:20:32.409 INFO:journalctl@ceph.rgw.foo.a.vm05.stdout:Mar 09 20:20:32 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-rgw-foo-a[86167]: 2026-03-09T20:20:32.197+0000 7f8e69495980 -1 LDAP not started since no server URIs were provided in the configuration. 
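[editor's note] The stderr above shows the `ceph` CLI client inside the `cephadm shell` authenticating with the monitors and sending mon_command({"prefix": "mgr dump", "format": "json"}); the JSON printed on stdout below is the reply. A minimal sketch of the polling pattern implied by the "waiting for mgr available" message, reusing the image, fsid, and JSON fields ("available", "active_name", "active_addr") seen in this log, is given here as a hypothetical helper; it is not the teuthology task's own code.

#!/usr/bin/env python3
# Hypothetical sketch (not the teuthology implementation): poll `ceph mgr dump`
# through `cephadm shell`, as the "waiting for mgr available" step above does,
# until the dump reports an active, available mgr.
import json
import subprocess
import time

# Values taken from this log; substitute your own cluster's image and fsid.
IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"
FSID = "c0151936-1bf4-11f1-b896-23f7bea8a6ea"

def mgr_dump():
    """Run `ceph mgr dump --format=json` inside a cephadm shell and parse it."""
    out = subprocess.check_output([
        "sudo", "cephadm", "--image", IMAGE, "shell", "--fsid", FSID,
        "--", "ceph", "mgr", "dump", "--format=json",
    ])
    return json.loads(out)

def wait_for_mgr_available(timeout=300, interval=5):
    """Return the mgr dump once 'available' is true, or raise on timeout."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        dump = mgr_dump()
        if dump.get("available"):
            return dump
        time.sleep(interval)
    raise TimeoutError("no active mgr became available in time")

if __name__ == "__main__":
    dump = wait_for_mgr_available()
    print("active mgr: %s at %s" % (dump["active_name"], dump["active_addr"]))

[end editor's note]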
2026-03-09T20:20:32.465 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":15,"flags":0,"active_gid":14150,"active_name":"y","active_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6800","nonce":3290461294}]},"active_addr":"192.168.123.105:6800/3290461294","active_change":"2026-03-09T20:18:17.477811+0000","active_mgr_features":4540701547738038271,"available":true,"standbys":[{"gid":24109,"name":"x","mgr_features":4540701547738038271,"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate 
as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send 
metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with 
`--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. 
Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. 
Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. 
Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0
,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"def
ault_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health 
metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this 
long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":
"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[
]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. 
Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. 
This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP 
requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"st
r","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":""
,"long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_a
llowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a 
cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"adv
anced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are 
busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}]}],"modules":["cephadm","dashboard","iostat","nfs","restful"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health 
status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = 
Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. 
Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger 
collector container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. 
Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. 
Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. 
Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0
,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"def
ault_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health 
metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this 
long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":
"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[
]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. 
Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. 
This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP 
requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"st
r","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":""
,"long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_a
llowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a 
cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"adv
anced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are 
busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{"dashboard":"https://192.168.123.105:8443/"},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"reef":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"squid":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"]},"force_disabled_modules":{},"last_failure_osd_epoch":3,"active_clients":[{"name":"devicehealth","addrvec":[{"type":"v2","addr":"192.168.123.105:0","nonce":1175033727}]},{"name":"libcephsqlite","addrvec":[{"type":"v2","addr":"192.168.123.105:0","nonce":4126256092}]},{"name":"rbd_support","addrvec":[{"type":"v2","a
ddr":"192.168.123.105:0","nonce":2579682170}]},{"name":"volumes","addrvec":[{"type":"v2","addr":"192.168.123.105:0","nonce":1583317011}]}]} 2026-03-09T20:20:32.466 INFO:tasks.cephadm.ceph_manager.ceph:mgr available! 2026-03-09T20:20:32.466 INFO:tasks.cephadm.ceph_manager.ceph:waiting for all up 2026-03-09T20:20:32.466 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph osd dump --format=json 2026-03-09T20:20:32.718 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:20:32.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:32 vm05 ceph-mon[61345]: pgmap v113: 132 pgs: 88 active+clean, 17 creating+peering, 27 unknown; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 3.5 KiB/s rd, 2.0 KiB/s wr, 7 op/s 2026-03-09T20:20:32.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:32 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.109:0/3721918138' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T20:20:32.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:32 vm05 ceph-mon[61345]: from='client.24442 ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T20:20:32.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:32 vm05 ceph-mon[61345]: from='client.24442 ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T20:20:32.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:32 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2713574341' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-09T20:20:32.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:32 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/27172436' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-09T20:20:32.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:32 vm05 ceph-mon[61345]: osdmap e57: 8 total, 8 up, 8 in 2026-03-09T20:20:32.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:32 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2724976054' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T20:20:32.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:32 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:32.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:32 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:32.763 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:32 vm05 ceph-mon[51870]: pgmap v113: 132 pgs: 88 active+clean, 17 creating+peering, 27 unknown; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 3.5 KiB/s rd, 2.0 KiB/s wr, 7 op/s 2026-03-09T20:20:32.763 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:32 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.109:0/3721918138' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T20:20:32.763 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:32 vm05 ceph-mon[51870]: from='client.24442 ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T20:20:32.763 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:32 vm05 ceph-mon[51870]: from='client.24442 ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T20:20:32.764 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:32 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2713574341' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-09T20:20:32.764 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:32 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/27172436' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-09T20:20:32.764 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:32 vm05 ceph-mon[51870]: osdmap e57: 8 total, 8 up, 8 in 2026-03-09T20:20:32.764 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:32 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2724976054' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T20:20:32.764 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:32 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:32.764 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:32 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:32.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:32 vm09 ceph-mon[54524]: pgmap v113: 132 pgs: 88 active+clean, 17 creating+peering, 27 unknown; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 3.5 KiB/s rd, 2.0 KiB/s wr, 7 op/s 2026-03-09T20:20:32.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:32 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.109:0/3721918138' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T20:20:32.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:32 vm09 ceph-mon[54524]: from='client.24442 ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T20:20:32.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:32 vm09 ceph-mon[54524]: from='client.24442 ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T20:20:32.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:32 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/2713574341' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-09T20:20:32.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:32 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/27172436' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-09T20:20:32.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:32 vm09 ceph-mon[54524]: osdmap e57: 8 total, 8 up, 8 in 2026-03-09T20:20:32.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:32 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2724976054' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T20:20:32.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:32 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:32.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:32 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:32.917 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.917+0000 7fa95493c640 1 -- 192.168.123.105:0/1533803510 >> v1:192.168.123.105:6789/0 conn(0x7fa95010a720 legacy=0x7fa95010ab00 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:32.917 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.918+0000 7fa95493c640 1 -- 192.168.123.105:0/1533803510 shutdown_connections 2026-03-09T20:20:32.917 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.918+0000 7fa95493c640 1 -- 192.168.123.105:0/1533803510 >> 192.168.123.105:0/1533803510 conn(0x7fa950100420 msgr2=0x7fa950102840 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:32.917 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.918+0000 7fa95493c640 1 -- 192.168.123.105:0/1533803510 shutdown_connections 2026-03-09T20:20:32.917 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.918+0000 7fa95493c640 1 -- 192.168.123.105:0/1533803510 wait complete. 
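At this point the cephadm ceph_manager task is in its "waiting for all up" loop: it shells into the cluster with cephadm and polls ceph osd dump --format=json until every OSD reports both up and in (the mon journal entries above already show "osdmap e57: 8 total, 8 up, 8 in", and the dump a few entries below lists all eight OSDs with up:1, in:1 before the task logs "all up!"). The following is a minimal standalone sketch of that kind of check, assuming the image and fsid values taken from the cephadm shell invocation in this log; it is not the teuthology implementation, and the all_osds_up_and_in helper name is hypothetical.

    import json
    import subprocess

    # Values copied from the cephadm shell invocation in the surrounding log
    # (assumed here purely for illustration).
    IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"
    FSID = "c0151936-1bf4-11f1-b896-23f7bea8a6ea"

    def all_osds_up_and_in():
        # Hypothetical helper: run `ceph osd dump --format=json` through
        # `cephadm shell` and return True once every OSD is both up and in.
        cmd = [
            "sudo", "cephadm", "--image", IMAGE,
            "shell", "--fsid", FSID, "--",
            "ceph", "osd", "dump", "--format=json",
        ]
        dump = json.loads(subprocess.check_output(cmd))
        # Each entry in "osds" carries "up" and "in" as 0/1 flags, as in the
        # dump printed further down in this log.
        osds = dump.get("osds", [])
        return bool(osds) and all(o["up"] == 1 and o["in"] == 1 for o in osds)

A caller would poll such a check with a short sleep between attempts until it returns True, which corresponds to the transition from "waiting for all up" to "all up!" a few entries below.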
2026-03-09T20:20:32.918 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.918+0000 7fa95493c640 1 Processor -- start 2026-03-09T20:20:32.918 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.919+0000 7fa95493c640 1 -- start start 2026-03-09T20:20:32.919 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.919+0000 7fa95493c640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fa9501ab630 con 0x7fa95010a720 2026-03-09T20:20:32.919 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.919+0000 7fa95493c640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fa9501ac830 con 0x7fa95010d5d0 2026-03-09T20:20:32.919 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.919+0000 7fa94f7fe640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7fa95010a720 0x7fa950110950 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:50566/0 (socket says 192.168.123.105:50566) 2026-03-09T20:20:32.919 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.919+0000 7fa94f7fe640 1 -- 192.168.123.105:0/3929496475 learned_addr learned my addr 192.168.123.105:0/3929496475 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:20:32.919 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.919+0000 7fa95493c640 1 -- 192.168.123.105:0/3929496475 --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fa9501ada30 con 0x7fa950111170 2026-03-09T20:20:32.919 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.919+0000 7fa94cff9640 1 -- 192.168.123.105:0/3929496475 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 151887251 0 0) 0x7fa9501ab630 con 0x7fa95010a720 2026-03-09T20:20:32.919 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.920+0000 7fa94cff9640 1 -- 192.168.123.105:0/3929496475 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fa924003620 con 0x7fa95010a720 2026-03-09T20:20:32.919 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.920+0000 7fa94cff9640 1 -- 192.168.123.105:0/3929496475 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 320602274 0 0) 0x7fa9501ada30 con 0x7fa950111170 2026-03-09T20:20:32.919 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.920+0000 7fa94cff9640 1 -- 192.168.123.105:0/3929496475 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fa9501ab630 con 0x7fa950111170 2026-03-09T20:20:32.920 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.920+0000 7fa94cff9640 1 -- 192.168.123.105:0/3929496475 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2887415151 0 0) 0x7fa924003620 con 0x7fa95010a720 2026-03-09T20:20:32.920 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.920+0000 7fa94cff9640 1 -- 192.168.123.105:0/3929496475 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fa9501ada30 con 0x7fa95010a720 2026-03-09T20:20:32.920 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.920+0000 7fa94cff9640 1 -- 192.168.123.105:0/3929496475 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7fa93c002d60 con 0x7fa95010a720 2026-03-09T20:20:32.920 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.920+0000 7fa94cff9640 1 -- 
192.168.123.105:0/3929496475 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1222903861 0 0) 0x7fa9501ac830 con 0x7fa95010d5d0 2026-03-09T20:20:32.920 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.920+0000 7fa94cff9640 1 -- 192.168.123.105:0/3929496475 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fa924003620 con 0x7fa95010d5d0 2026-03-09T20:20:32.920 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.920+0000 7fa94cff9640 1 -- 192.168.123.105:0/3929496475 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 40793983 0 0) 0x7fa9501ab630 con 0x7fa950111170 2026-03-09T20:20:32.920 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.920+0000 7fa94cff9640 1 -- 192.168.123.105:0/3929496475 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fa9501ac830 con 0x7fa950111170 2026-03-09T20:20:32.920 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.920+0000 7fa94cff9640 1 -- 192.168.123.105:0/3929496475 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7fa944003020 con 0x7fa950111170 2026-03-09T20:20:32.920 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.921+0000 7fa94cff9640 1 -- 192.168.123.105:0/3929496475 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 3202764622 0 0) 0x7fa9501ada30 con 0x7fa95010a720 2026-03-09T20:20:32.920 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.921+0000 7fa94cff9640 1 -- 192.168.123.105:0/3929496475 >> v1:192.168.123.105:6790/0 conn(0x7fa950111170 legacy=0x7fa9501a9d30 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:32.920 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.921+0000 7fa94cff9640 1 -- 192.168.123.105:0/3929496475 >> v1:192.168.123.109:6789/0 conn(0x7fa95010d5d0 legacy=0x7fa9501a6460 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:32.920 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.921+0000 7fa94cff9640 1 -- 192.168.123.105:0/3929496475 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fa9501aec30 con 0x7fa95010a720 2026-03-09T20:20:32.920 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.921+0000 7fa94cff9640 1 -- 192.168.123.105:0/3929496475 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7fa93c003250 con 0x7fa95010a720 2026-03-09T20:20:32.921 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.921+0000 7fa94cff9640 1 -- 192.168.123.105:0/3929496475 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7fa93c004f80 con 0x7fa95010a720 2026-03-09T20:20:32.921 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.921+0000 7fa95493c640 1 -- 192.168.123.105:0/3929496475 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7fa9501ab860 con 0x7fa95010a720 2026-03-09T20:20:32.921 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.922+0000 7fa95493c640 1 -- 192.168.123.105:0/3929496475 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7fa9501abe10 con 0x7fa95010a720 2026-03-09T20:20:32.923 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.923+0000 7fa94cff9640 1 -- 192.168.123.105:0/3929496475 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 
4267040124 0 0) 0x7fa93c005120 con 0x7fa95010a720 2026-03-09T20:20:32.923 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.924+0000 7fa95493c640 1 -- 192.168.123.105:0/3929496475 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fa914005180 con 0x7fa95010a720 2026-03-09T20:20:32.925 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.924+0000 7fa94cff9640 1 -- 192.168.123.105:0/3929496475 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(57..57 src has 1..57) ==== 5922+0+0 (unknown 2562478528 0 0) 0x7fa93c093870 con 0x7fa95010a720 2026-03-09T20:20:32.926 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:32.927+0000 7fa94cff9640 1 -- 192.168.123.105:0/3929496475 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7fa93c05e0a0 con 0x7fa95010a720 2026-03-09T20:20:33.043 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.043+0000 7fa95493c640 1 -- 192.168.123.105:0/3929496475 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "osd dump", "format": "json"} v 0) -- 0x7fa914005470 con 0x7fa95010a720 2026-03-09T20:20:33.045 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.046+0000 7fa94cff9640 1 -- 192.168.123.105:0/3929496475 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "osd dump", "format": "json"}]=0 v57) ==== 74+0+20918 (unknown 2588859899 0 951626547) 0x7fa93c061d50 con 0x7fa95010a720 2026-03-09T20:20:33.045 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:20:33.046 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":57,"fsid":"c0151936-1bf4-11f1-b896-23f7bea8a6ea","created":"2026-03-09T20:17:54.449051+0000","modified":"2026-03-09T20:20:32.090260+0000","last_up_change":"2026-03-09T20:20:19.940896+0000","last_in_change":"2026-03-09T20:20:09.342882+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":6,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-09T20:19:21.537102+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"21","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":
0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":2,"pool_name":"datapool","create_time":"2026-03-09T20:20:23.652059+0000","flags":8193,"flags_names":"hashpspool,selfmanaged_snaps","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":3,"pg_placement_num":3,"pg_placement_num_target":3,"pg_num_target":3,"pg_num_pending":3,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"52","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":2,"snap_epoch":52,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rbd":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":2.6500000953674316,"score_stable":2.6500000953674316,"optimal_score":0.87999999523162842,"raw_score_acting":2.3299999237060547,"raw_score_stable":2.3299999237060547,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":3,"pool_name":".rgw.root","create_time":"2026-03-09T20:20:24.020998+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"51","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.5,"score_stable":1.5,"optimal_score":1,"raw_score_acting":1.5,"raw_score_stable":1.5,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":4,"pool_name":"default.rgw.log","create_time":"2026-03-09T20:20:25.170198+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"53","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":2.25,"score_stable":2.25,"optimal_score":1,"raw_score_acting":2.25,"raw_score_stable":2.25,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":5,"pool_name":"default.rgw.control","create_time":"2026-03-09T20:20:27.112596+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"55","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.25,"score_stable":1.25,"optimal_score":1,"raw_score_acting":1.25,"raw_score_stable":1.25,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":6,"pool_name":"default.rgw.meta","create_time":"2026-03-09T20:20:29.172006+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"57","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_autoscale_bias":4},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.75,"score_stable":1.75,"optimal_score":1,"raw_score_acting":1.75,"raw_score_stable":1.75,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"35c6a684-ee69-44bf-83ae-27ddd2fd2486","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":55,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6801","nonce":1625499026}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6802","nonce":1625499026}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6804","nonce":1625499026}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6803","nonce":1625499026}]},"public_addr":"192.168.123.105:6801/1625499026","cluster_addr":"192.168.123.105:6802/1625499026","heartbeat_back_addr":"192.168.123.105:6804/1625499026","heartbeat_front_addr":"192.168.123.105:6803/1625499026","state":["exists","up"]},{"osd":1,"uuid":"4a3ff444-017e-44cd-9222-93f1d8dcc4db","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":12,"up_thru":55,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6805","nonce":3664200689}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6806","nonce":3664200689}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6808","nonce":3664200689}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6807","nonce":3664200689}]},"public_addr":"192.168.123.105:6805/3664200689","cluster_addr":"192.168.123.105:6806/3664200689","heartbeat_back_addr":"192.168.123.105:6808/3664200689","heartbeat_front_addr":"192.168.123.105:6807/3664200689","state":["exists","up"]},{"osd":2,"uuid":"58868a45-388a-4244-bde9-e525f4e2b7d5","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":17,"up_thru":55,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6809","nonce":1060255430}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6810","nonce":1060255430}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6812","nonce":1060255430}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6811","nonce":1060255430}]},"public_addr":"192.168.123.105:6809/1060255430","cluster_addr":"192.168.123.105:6810/1060255430","heartbeat_back_addr":"192.168.123.105:6812/1060255430","heartbeat_front_addr":"192.168.123.105:6811/1060255430","state":["exists","up"]},{"osd":3,"uuid":"4c40929b-9b22-486e-aed2-a111cbaa96da","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":25,"up_thru":55,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6813","nonce":4176641888}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6814","nonce":4176641888}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6816","nonce":4176641888}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6815","nonce":4176641888}]},"public_addr":"192.168.123.105:6813/4176641888","cluster_addr":"192.168.123.105:6814/4176641888","heartbeat_back_addr":"192.168.123.105:6816/4176641888","heartbeat_front_addr":"192.168.123.105:6815/4176641888","state":["exists","up"]},{"osd":4,"uuid":"acddd4eb-0110-4992-a3c7-201ba9fd8f8e","up":1,"in":1,"weight":1,
"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":30,"up_thru":55,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6800","nonce":4063967321}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6801","nonce":4063967321}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6803","nonce":4063967321}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6802","nonce":4063967321}]},"public_addr":"192.168.123.109:6800/4063967321","cluster_addr":"192.168.123.109:6801/4063967321","heartbeat_back_addr":"192.168.123.109:6803/4063967321","heartbeat_front_addr":"192.168.123.109:6802/4063967321","state":["exists","up"]},{"osd":5,"uuid":"61fedd79-419a-4176-9825-9d059c9d73f0","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":36,"up_thru":55,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6804","nonce":3558334635}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6805","nonce":3558334635}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6807","nonce":3558334635}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6806","nonce":3558334635}]},"public_addr":"192.168.123.109:6804/3558334635","cluster_addr":"192.168.123.109:6805/3558334635","heartbeat_back_addr":"192.168.123.109:6807/3558334635","heartbeat_front_addr":"192.168.123.109:6806/3558334635","state":["exists","up"]},{"osd":6,"uuid":"d4965700-0e14-493b-8c85-282e7ba1da51","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":41,"up_thru":53,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6808","nonce":3079043049}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6809","nonce":3079043049}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6811","nonce":3079043049}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6810","nonce":3079043049}]},"public_addr":"192.168.123.109:6808/3079043049","cluster_addr":"192.168.123.109:6809/3079043049","heartbeat_back_addr":"192.168.123.109:6811/3079043049","heartbeat_front_addr":"192.168.123.109:6810/3079043049","state":["exists","up"]},{"osd":7,"uuid":"ae4f5298-3a65-4f5e-b653-7ee92ac3f2a9","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":46,"up_thru":55,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6812","nonce":4141797613}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6813","nonce":4141797613}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6815","nonce":4141797613}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6814","nonce":4141797613}]},"public_addr":"192.168.123.109:6812/4141797613","cluster_addr":"192.168.123.109:6813/4141797613","heartbeat_back_addr":"192.168.123.109:6815/4141797613","heartbeat_front_addr":"192.168.123.109:6814/4141797613","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T20:18:55.880962+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T20:19:07.124566+000
0","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T20:19:18.432517+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T20:19:29.741981+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T20:19:42.702664+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T20:19:55.089147+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T20:20:07.303163+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T20:20:18.384854+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.105:6800/1901557444":"2026-03-10T20:18:17.477725+0000","192.168.123.105:0/4136016323":"2026-03-10T20:18:17.477725+0000","192.168.123.105:0/3703967877":"2026-03-10T20:18:17.477725+0000","192.168.123.105:0/4146364495":"2026-03-10T20:18:06.314330+0000","192.168.123.105:0/3832503883":"2026-03-10T20:18:06.314330+0000","192.168.123.105:0/3398073401":"2026-03-10T20:18:06.314330+0000","192.168.123.105:0/2964833350":"2026-03-10T20:18:17.477725+0000","192.168.123.105:6800/4277841438":"2026-03-10T20:18:06.314330+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[{"pool":2,"snaps":[{"begin":2,"length":1}]}],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-09T20:20:33.047 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.048+0000 7fa95493c640 1 -- 192.168.123.105:0/3929496475 >> v1:192.168.123.105:6800/3290461294 conn(0x7fa9240781a0 legacy=0x7fa92407a660 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:33.047 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.048+0000 7fa95493c640 1 -- 192.168.123.105:0/3929496475 >> v1:192.168.123.105:6789/0 conn(0x7fa95010a720 legacy=0x7fa950110950 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:33.048 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.049+0000 7fa95493c640 1 -- 192.168.123.105:0/3929496475 shutdown_connections 2026-03-09T20:20:33.048 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.049+0000 7fa95493c640 1 -- 192.168.123.105:0/3929496475 >> 192.168.123.105:0/3929496475 conn(0x7fa950100420 msgr2=0x7fa950114250 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:33.048 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.049+0000 7fa95493c640 1 -- 192.168.123.105:0/3929496475 shutdown_connections 2026-03-09T20:20:33.048 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.049+0000 7fa95493c640 1 -- 
192.168.123.105:0/3929496475 wait complete. 2026-03-09T20:20:33.194 INFO:tasks.cephadm.ceph_manager.ceph:all up! 2026-03-09T20:20:33.194 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph osd dump --format=json 2026-03-09T20:20:33.371 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:20:33.482 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:20:33 vm09 systemd[1]: Starting Ceph iscsi.iscsi.a for c0151936-1bf4-11f1-b896-23f7bea8a6ea... 2026-03-09T20:20:33.518 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.518+0000 7f4af532e640 1 -- 192.168.123.105:0/2527778557 >> v1:192.168.123.105:6790/0 conn(0x7f4af010d7f0 legacy=0x7f4af010fbe0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:33.518 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.519+0000 7f4af532e640 1 -- 192.168.123.105:0/2527778557 shutdown_connections 2026-03-09T20:20:33.518 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.519+0000 7f4af532e640 1 -- 192.168.123.105:0/2527778557 >> 192.168.123.105:0/2527778557 conn(0x7f4af0100620 msgr2=0x7f4af0102a40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:33.518 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.519+0000 7f4af532e640 1 -- 192.168.123.105:0/2527778557 shutdown_connections 2026-03-09T20:20:33.518 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.519+0000 7f4af532e640 1 -- 192.168.123.105:0/2527778557 wait complete. 2026-03-09T20:20:33.519 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.519+0000 7f4af532e640 1 Processor -- start 2026-03-09T20:20:33.519 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.519+0000 7f4af532e640 1 -- start start 2026-03-09T20:20:33.519 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.520+0000 7f4af532e640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f4af0111030 con 0x7f4af010a940 2026-03-09T20:20:33.519 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.520+0000 7f4af532e640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f4af01acde0 con 0x7f4af010d7f0 2026-03-09T20:20:33.519 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.520+0000 7f4af532e640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f4af01adfc0 con 0x7f4af0111390 2026-03-09T20:20:33.519 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.520+0000 7f4aef7fe640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7f4af0111390 0x7f4af01aa6b0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.105:47432/0 (socket says 192.168.123.105:47432) 2026-03-09T20:20:33.520 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.520+0000 7f4aef7fe640 1 -- 192.168.123.105:0/4197983907 learned_addr learned my addr 192.168.123.105:0/4197983907 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:20:33.520 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.520+0000 7f4acbfff640 1 -- 192.168.123.105:0/4197983907 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1256855176 0 0) 0x7f4af01adfc0 con 0x7f4af0111390 2026-03-09T20:20:33.520 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.520+0000 7f4acbfff640 1 -- 192.168.123.105:0/4197983907 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f4ac4003620 con 0x7f4af0111390 2026-03-09T20:20:33.520 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.520+0000 7f4acbfff640 1 -- 192.168.123.105:0/4197983907 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2082956431 0 0) 0x7f4af01acde0 con 0x7f4af010d7f0 2026-03-09T20:20:33.520 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.520+0000 7f4acbfff640 1 -- 192.168.123.105:0/4197983907 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f4af01adfc0 con 0x7f4af010d7f0 2026-03-09T20:20:33.520 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.521+0000 7f4acbfff640 1 -- 192.168.123.105:0/4197983907 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3215697058 0 0) 0x7f4af01adfc0 con 0x7f4af010d7f0 2026-03-09T20:20:33.520 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.521+0000 7f4acbfff640 1 -- 192.168.123.105:0/4197983907 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f4af01acde0 con 0x7f4af010d7f0 2026-03-09T20:20:33.520 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.521+0000 7f4acbfff640 1 -- 192.168.123.105:0/4197983907 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1020201189 0 0) 0x7f4ac4003620 con 0x7f4af0111390 2026-03-09T20:20:33.520 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.521+0000 7f4acbfff640 1 -- 192.168.123.105:0/4197983907 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f4af01adfc0 con 0x7f4af0111390 2026-03-09T20:20:33.520 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.521+0000 7f4acbfff640 1 -- 192.168.123.105:0/4197983907 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f4adc002d60 con 0x7f4af010d7f0 2026-03-09T20:20:33.520 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.521+0000 7f4acbfff640 1 -- 192.168.123.105:0/4197983907 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f4ae00034a0 con 0x7f4af0111390 2026-03-09T20:20:33.520 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.521+0000 7f4acbfff640 1 -- 192.168.123.105:0/4197983907 <== mon.1 v1:192.168.123.109:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2651816172 0 0) 0x7f4af01acde0 con 0x7f4af010d7f0 2026-03-09T20:20:33.520 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.521+0000 7f4acbfff640 1 -- 192.168.123.105:0/4197983907 >> v1:192.168.123.105:6790/0 conn(0x7f4af0111390 legacy=0x7f4af01aa6b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:33.520 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.521+0000 7f4acbfff640 1 -- 192.168.123.105:0/4197983907 >> v1:192.168.123.105:6789/0 conn(0x7f4af010a940 legacy=0x7f4af010df10 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:33.521 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.521+0000 7f4acbfff640 1 -- 192.168.123.105:0/4197983907 --> v1:192.168.123.109:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f4af01af1a0 con 0x7f4af010d7f0 2026-03-09T20:20:33.521 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.521+0000 
7f4af532e640 1 -- 192.168.123.105:0/4197983907 --> v1:192.168.123.109:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f4af01acfb0 con 0x7f4af010d7f0 2026-03-09T20:20:33.521 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.521+0000 7f4acbfff640 1 -- 192.168.123.105:0/4197983907 <== mon.1 v1:192.168.123.109:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f4adc003b40 con 0x7f4af010d7f0 2026-03-09T20:20:33.523 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.523+0000 7f4af532e640 1 -- 192.168.123.105:0/4197983907 --> v1:192.168.123.109:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f4af01ad560 con 0x7f4af010d7f0 2026-03-09T20:20:33.523 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.523+0000 7f4acbfff640 1 -- 192.168.123.105:0/4197983907 <== mon.1 v1:192.168.123.109:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f4adc004d70 con 0x7f4af010d7f0 2026-03-09T20:20:33.523 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.523+0000 7f4acbfff640 1 -- 192.168.123.105:0/4197983907 <== mon.1 v1:192.168.123.109:6789/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 4267040124 0 0) 0x7f4adc01d680 con 0x7f4af010d7f0 2026-03-09T20:20:33.523 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.524+0000 7f4acbfff640 1 -- 192.168.123.105:0/4197983907 <== mon.1 v1:192.168.123.109:6789/0 8 ==== osd_map(57..57 src has 1..57) ==== 5922+0+0 (unknown 2562478528 0 0) 0x7f4adc003920 con 0x7f4af010d7f0 2026-03-09T20:20:33.523 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.524+0000 7f4af532e640 1 -- 192.168.123.105:0/4197983907 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f4af0106080 con 0x7f4af010d7f0 2026-03-09T20:20:33.526 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.527+0000 7f4acbfff640 1 -- 192.168.123.105:0/4197983907 <== mon.1 v1:192.168.123.109:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f4adc05cd30 con 0x7f4af010d7f0 2026-03-09T20:20:33.654 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:33 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:33.654 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:33 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T20:20:33.654 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:33 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-09T20:20:33.654 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:33 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:33.654 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:33 vm05 ceph-mon[51870]: Deploying daemon iscsi.iscsi.a on vm09 2026-03-09T20:20:33.654 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:33 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3929496475' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T20:20:33.654 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:33 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:33.654 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:33 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T20:20:33.654 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:33 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-09T20:20:33.654 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:33 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:33.654 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:33 vm05 ceph-mon[61345]: Deploying daemon iscsi.iscsi.a on vm09 2026-03-09T20:20:33.654 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:33 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3929496475' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T20:20:33.654 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.654+0000 7f4af532e640 1 -- 192.168.123.105:0/4197983907 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "osd dump", "format": "json"} v 0) -- 0x7f4af010f330 con 0x7f4af010d7f0 2026-03-09T20:20:33.659 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.659+0000 7f4acbfff640 1 -- 192.168.123.105:0/4197983907 <== mon.1 v1:192.168.123.109:6789/0 10 ==== mon_command_ack([{"prefix": "osd dump", "format": "json"}]=0 v57) ==== 74+0+20918 (unknown 2588859899 0 951626547) 0x7f4adc065920 con 0x7f4af010d7f0 2026-03-09T20:20:33.659 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:20:33.659 
INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":57,"fsid":"c0151936-1bf4-11f1-b896-23f7bea8a6ea","created":"2026-03-09T20:17:54.449051+0000","modified":"2026-03-09T20:20:32.090260+0000","last_up_change":"2026-03-09T20:20:19.940896+0000","last_in_change":"2026-03-09T20:20:09.342882+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":6,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-09T20:19:21.537102+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"21","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":2,"pool_name":"datapool","create_time":"2026-03-09T20:20:23.652059+0000","flags":8193,"flags_names":"hashpspool,selfmanaged_snaps","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":3,"pg_placement_num":3,"pg_placement_num_target":3,"pg_num_target":3,"pg_num_pending":3,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"52","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":2,"snap_epoch":52,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rbd":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":2.6500000953674316,"score_stable":2.6500000953674316,"optimal_score":0.87999999523162842,"raw_score_acting":2.3299999237060547,"raw_score_stable":2.3299999237060547,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":3,"pool_name":".rgw.root","create_time":"2026-03-09T20:20:24.020998+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"51","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.5,"score_stable":1.5,"optimal_score":1,"raw_score_acting":1.5,"raw_score_stable":1.5,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":4,"pool_name":"default.rgw.log","create_time":"2026-03-09T20:20:25.170198+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"53","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":2.25,"score_stable":2.25,"optimal_score":1,"raw_score_acting":2.25,"raw_score_stable":2.25,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":5,"pool_name":"default.rgw.control","create_time":"2026-03-09T20:20:27.112596+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"55","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.25,"score_stable":1.25,"optimal_score":1,"raw_score_acting":1.25,"raw_score_stable":1.25,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":6,"pool_name":"default.rgw.meta","create_time":"2026-03-09T20:20:29.172006+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"57","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_autoscale_bias":4},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.75,"score_stable":1.75,"optimal_score":1,"raw_score_acting":1.75,"raw_score_stable":1.75,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"35c6a684-ee69-44bf-83ae-27ddd2fd2486","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":55,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6801","nonce":1625499026}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6802","nonce":1625499026}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6804","nonce":1625499026}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6803","nonce":1625499026}]},"public_addr":"192.168.123.105:6801/1625499026","cluster_addr":"192.168.123.105:6802/1625499026","heartbeat_back_addr":"192.168.123.105:6804/1625499026","heartbeat_front_addr":"192.168.123.105:6803/1625499026","state":["exists","up"]},{"osd":1,"uuid":"4a3ff444-017e-44cd-9222-93f1d8dcc4db","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":12,"up_thru":55,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6805","nonce":3664200689}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6806","nonce":3664200689}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6808","nonce":3664200689}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6807","nonce":3664200689}]},"public_addr":"192.168.123.105:6805/3664200689","cluster_addr":"192.168.123.105:6806/3664200689","heartbeat_back_addr":"192.168.123.105:6808/3664200689","heartbeat_front_addr":"192.168.123.105:6807/3664200689","state":["exists","up"]},{"osd":2,"uuid":"58868a45-388a-4244-bde9-e525f4e2b7d5","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":17,"up_thru":55,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6809","nonce":1060255430}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6810","nonce":1060255430}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6812","nonce":1060255430}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6811","nonce":1060255430}]},"public_addr":"192.168.123.105:6809/1060255430","cluster_addr":"192.168.123.105:6810/1060255430","heartbeat_back_addr":"192.168.123.105:6812/1060255430","heartbeat_front_addr":"192.168.123.105:6811/1060255430","state":["exists","up"]},{"osd":3,"uuid":"4c40929b-9b22-486e-aed2-a111cbaa96da","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":25,"up_thru":55,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6813","nonce":4176641888}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6814","nonce":4176641888}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6816","nonce":4176641888}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6815","nonce":4176641888}]},"public_addr":"192.168.123.105:6813/4176641888","cluster_addr":"192.168.123.105:6814/4176641888","heartbeat_back_addr":"192.168.123.105:6816/4176641888","heartbeat_front_addr":"192.168.123.105:6815/4176641888","state":["exists","up"]},{"osd":4,"uuid":"acddd4eb-0110-4992-a3c7-201ba9fd8f8e","up":1,"in":1,"weight":1,
"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":30,"up_thru":55,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6800","nonce":4063967321}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6801","nonce":4063967321}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6803","nonce":4063967321}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6802","nonce":4063967321}]},"public_addr":"192.168.123.109:6800/4063967321","cluster_addr":"192.168.123.109:6801/4063967321","heartbeat_back_addr":"192.168.123.109:6803/4063967321","heartbeat_front_addr":"192.168.123.109:6802/4063967321","state":["exists","up"]},{"osd":5,"uuid":"61fedd79-419a-4176-9825-9d059c9d73f0","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":36,"up_thru":55,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6804","nonce":3558334635}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6805","nonce":3558334635}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6807","nonce":3558334635}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6806","nonce":3558334635}]},"public_addr":"192.168.123.109:6804/3558334635","cluster_addr":"192.168.123.109:6805/3558334635","heartbeat_back_addr":"192.168.123.109:6807/3558334635","heartbeat_front_addr":"192.168.123.109:6806/3558334635","state":["exists","up"]},{"osd":6,"uuid":"d4965700-0e14-493b-8c85-282e7ba1da51","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":41,"up_thru":53,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6808","nonce":3079043049}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6809","nonce":3079043049}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6811","nonce":3079043049}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6810","nonce":3079043049}]},"public_addr":"192.168.123.109:6808/3079043049","cluster_addr":"192.168.123.109:6809/3079043049","heartbeat_back_addr":"192.168.123.109:6811/3079043049","heartbeat_front_addr":"192.168.123.109:6810/3079043049","state":["exists","up"]},{"osd":7,"uuid":"ae4f5298-3a65-4f5e-b653-7ee92ac3f2a9","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":46,"up_thru":55,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6812","nonce":4141797613}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6813","nonce":4141797613}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6815","nonce":4141797613}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6814","nonce":4141797613}]},"public_addr":"192.168.123.109:6812/4141797613","cluster_addr":"192.168.123.109:6813/4141797613","heartbeat_back_addr":"192.168.123.109:6815/4141797613","heartbeat_front_addr":"192.168.123.109:6814/4141797613","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T20:18:55.880962+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T20:19:07.124566+000
0","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T20:19:18.432517+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T20:19:29.741981+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T20:19:42.702664+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T20:19:55.089147+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T20:20:07.303163+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T20:20:18.384854+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.105:6800/1901557444":"2026-03-10T20:18:17.477725+0000","192.168.123.105:0/4136016323":"2026-03-10T20:18:17.477725+0000","192.168.123.105:0/3703967877":"2026-03-10T20:18:17.477725+0000","192.168.123.105:0/4146364495":"2026-03-10T20:18:06.314330+0000","192.168.123.105:0/3832503883":"2026-03-10T20:18:06.314330+0000","192.168.123.105:0/3398073401":"2026-03-10T20:18:06.314330+0000","192.168.123.105:0/2964833350":"2026-03-10T20:18:17.477725+0000","192.168.123.105:6800/4277841438":"2026-03-10T20:18:06.314330+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[{"pool":2,"snaps":[{"begin":2,"length":1}]}],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-09T20:20:33.662 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.662+0000 7f4ac9ffb640 1 -- 192.168.123.105:0/4197983907 >> v1:192.168.123.105:6800/3290461294 conn(0x7f4ac4078040 legacy=0x7f4ac407a500 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:33.662 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.662+0000 7f4ac9ffb640 1 -- 192.168.123.105:0/4197983907 >> v1:192.168.123.109:6789/0 conn(0x7f4af010d7f0 legacy=0x7f4af010e620 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:33.662 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.662+0000 7f4ac9ffb640 1 -- 192.168.123.105:0/4197983907 shutdown_connections 2026-03-09T20:20:33.662 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.662+0000 7f4ac9ffb640 1 -- 192.168.123.105:0/4197983907 >> 192.168.123.105:0/4197983907 conn(0x7f4af0100620 msgr2=0x7f4af010cdf0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:33.662 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.663+0000 7f4ac9ffb640 1 -- 192.168.123.105:0/4197983907 shutdown_connections 2026-03-09T20:20:33.662 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:33.663+0000 7f4ac9ffb640 1 -- 
192.168.123.105:0/4197983907 wait complete. 2026-03-09T20:20:33.743 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:33 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:33.743 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:33 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T20:20:33.743 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:33 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-09T20:20:33.743 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:33 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:33.743 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:33 vm09 ceph-mon[54524]: Deploying daemon iscsi.iscsi.a on vm09 2026-03-09T20:20:33.743 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:33 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3929496475' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T20:20:33.743 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:20:33 vm09 podman[79388]: 2026-03-09 20:20:33.505440269 +0000 UTC m=+0.018597433 container create 32c4c55b149612244236d3e5df1d169ce0b22d0e0eb31fa6da24d37596176732 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223) 2026-03-09T20:20:33.743 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:20:33 vm09 podman[79388]: 2026-03-09 20:20:33.549684209 +0000 UTC m=+0.062841373 container init 32c4c55b149612244236d3e5df1d169ce0b22d0e0eb31fa6da24d37596176732 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, 
OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-09T20:20:33.743 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:20:33 vm09 podman[79388]: 2026-03-09 20:20:33.554466091 +0000 UTC m=+0.067623255 container start 32c4c55b149612244236d3e5df1d169ce0b22d0e0eb31fa6da24d37596176732 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid) 2026-03-09T20:20:33.743 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:20:33 vm09 bash[79388]: 32c4c55b149612244236d3e5df1d169ce0b22d0e0eb31fa6da24d37596176732 2026-03-09T20:20:33.743 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:20:33 vm09 podman[79388]: 2026-03-09 20:20:33.498179808 +0000 UTC m=+0.011336972 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T20:20:33.743 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:20:33 vm09 systemd[1]: Started Ceph iscsi.iscsi.a for c0151936-1bf4-11f1-b896-23f7bea8a6ea. 
2026-03-09T20:20:33.817 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph tell osd.0 flush_pg_stats 2026-03-09T20:20:33.817 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph tell osd.1 flush_pg_stats 2026-03-09T20:20:33.817 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph tell osd.2 flush_pg_stats 2026-03-09T20:20:33.817 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph tell osd.3 flush_pg_stats 2026-03-09T20:20:33.817 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph tell osd.4 flush_pg_stats 2026-03-09T20:20:33.817 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph tell osd.5 flush_pg_stats 2026-03-09T20:20:33.817 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph tell osd.6 flush_pg_stats 2026-03-09T20:20:33.817 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph tell osd.7 flush_pg_stats 2026-03-09T20:20:34.019 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:20:33 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug Started the configuration object watcher 2026-03-09T20:20:34.019 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:20:33 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug Checking for config object changes every 1s 2026-03-09T20:20:34.019 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:20:33 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug Processing osd blocklist entries for this node 2026-03-09T20:20:34.019 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:20:34 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug Reading the configuration object to update local LIO configuration 2026-03-09T20:20:34.019 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:20:34 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug Configuration does not have an entry for this host(vm09.local) - nothing to define to LIO 2026-03-09T20:20:34.019 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:20:34 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: * Serving Flask app 'rbd-target-api' (lazy loading) 2026-03-09T20:20:34.019 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:20:34 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: * Environment: production 2026-03-09T20:20:34.019 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:20:34 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: WARNING: This is a development 
server. Do not use it in a production deployment. 2026-03-09T20:20:34.019 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:20:34 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: Use a production WSGI server instead. 2026-03-09T20:20:34.019 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:20:34 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: * Debug mode: off 2026-03-09T20:20:34.272 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:20:34 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug * Running on all addresses. 2026-03-09T20:20:34.272 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:20:34 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: WARNING: This is a development server. Do not use it in a production deployment. 2026-03-09T20:20:34.272 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:20:34 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: * Running on all addresses. 2026-03-09T20:20:34.272 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:20:34 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: WARNING: This is a development server. Do not use it in a production deployment. 2026-03-09T20:20:34.272 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:20:34 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug * Running on http://[::1]:5000/ (Press CTRL+C to quit) 2026-03-09T20:20:34.272 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:20:34 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: * Running on http://[::1]:5000/ (Press CTRL+C to quit) 2026-03-09T20:20:34.629 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:20:34.670 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:20:34.758 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:34 vm05 ceph-mon[51870]: pgmap v115: 132 pgs: 105 active+clean, 17 creating+peering, 10 unknown; 451 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 18 KiB/s rd, 2.5 KiB/s wr, 42 op/s 2026-03-09T20:20:34.758 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:34 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:34.758 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:34 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:34.758 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:34 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:34.758 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:34 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:34.758 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:34 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/4197983907' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T20:20:34.759 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:34 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.109:0/4079333989' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T20:20:34.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:34 vm05 ceph-mon[61345]: pgmap v115: 132 pgs: 105 active+clean, 17 creating+peering, 10 unknown; 451 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 18 KiB/s rd, 2.5 KiB/s wr, 42 op/s 2026-03-09T20:20:34.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:34 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:34.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:34 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:34.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:34 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:34.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:34 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:34.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:34 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/4197983907' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T20:20:34.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:34 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.109:0/4079333989' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T20:20:34.771 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:20:34.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:34 vm09 ceph-mon[54524]: pgmap v115: 132 pgs: 105 active+clean, 17 creating+peering, 10 unknown; 451 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 18 KiB/s rd, 2.5 KiB/s wr, 42 op/s 2026-03-09T20:20:34.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:34 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:34.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:34 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:34.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:34 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:34.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:34 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:34.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:34 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/4197983907' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T20:20:34.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:34 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.109:0/4079333989' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T20:20:34.776 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:20:34.783 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:20:34.784 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:20:34.957 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:20:35.163 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.162+0000 7fd7a079b640 1 -- 192.168.123.105:0/869038718 >> v1:192.168.123.105:6790/0 conn(0x7fd798074230 legacy=0x7fd798074610 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:35.163 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.162+0000 7fd7a079b640 1 -- 192.168.123.105:0/869038718 shutdown_connections 2026-03-09T20:20:35.163 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.162+0000 7fd7a079b640 1 -- 192.168.123.105:0/869038718 >> 192.168.123.105:0/869038718 conn(0x7fd79806e900 msgr2=0x7fd79806ed10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:35.164 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.162+0000 7fd7a079b640 1 -- 192.168.123.105:0/869038718 shutdown_connections 2026-03-09T20:20:35.164 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.162+0000 7fd7a079b640 1 -- 192.168.123.105:0/869038718 wait complete. 2026-03-09T20:20:35.164 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.163+0000 7fd7a079b640 1 Processor -- start 2026-03-09T20:20:35.164 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.163+0000 7fd7a079b640 1 -- start start 2026-03-09T20:20:35.164 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.163+0000 7fd7a079b640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fd79813a200 con 0x7fd7980772b0 2026-03-09T20:20:35.164 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.163+0000 7fd7a079b640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fd79813b400 con 0x7fd79807ae70 2026-03-09T20:20:35.164 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.163+0000 7fd7a079b640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fd79813c600 con 0x7fd798136460 2026-03-09T20:20:35.164 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.163+0000 7fd79e510640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7fd7980772b0 0x7fd79810afc0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:50612/0 (socket says 192.168.123.105:50612) 2026-03-09T20:20:35.164 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.163+0000 7fd79e510640 1 -- 192.168.123.105:0/1484940784 learned_addr learned my addr 192.168.123.105:0/1484940784 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:20:35.164 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.165+0000 7fd78f7fe640 1 -- 192.168.123.105:0/1484940784 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 595101415 0 0) 0x7fd79813a200 con 0x7fd7980772b0 2026-03-09T20:20:35.165 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.165+0000 7fd78f7fe640 1 -- 192.168.123.105:0/1484940784 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fd770003620 con 0x7fd7980772b0 2026-03-09T20:20:35.165 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.165+0000 7fd78f7fe640 1 -- 192.168.123.105:0/1484940784 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3716513136 0 0) 0x7fd79813c600 con 0x7fd798136460 2026-03-09T20:20:35.165 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.165+0000 7fd78f7fe640 1 -- 192.168.123.105:0/1484940784 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fd79813a200 con 0x7fd798136460 2026-03-09T20:20:35.165 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.166+0000 7fd78f7fe640 1 -- 192.168.123.105:0/1484940784 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 4239606477 0 0) 0x7fd79813b400 con 0x7fd79807ae70 2026-03-09T20:20:35.165 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.166+0000 7fd78f7fe640 1 -- 192.168.123.105:0/1484940784 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fd79813c600 con 0x7fd79807ae70 2026-03-09T20:20:35.165 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.166+0000 7fd78f7fe640 1 -- 192.168.123.105:0/1484940784 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2538289670 0 0) 0x7fd79813a200 con 0x7fd798136460 2026-03-09T20:20:35.165 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.166+0000 7fd78f7fe640 1 -- 192.168.123.105:0/1484940784 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fd79813b400 con 0x7fd798136460 2026-03-09T20:20:35.165 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.166+0000 7fd78f7fe640 1 -- 192.168.123.105:0/1484940784 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2592445260 0 0) 0x7fd770003620 con 0x7fd7980772b0 2026-03-09T20:20:35.165 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.166+0000 7fd78f7fe640 1 -- 192.168.123.105:0/1484940784 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fd79813a200 con 0x7fd7980772b0 2026-03-09T20:20:35.165 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.166+0000 7fd78f7fe640 1 -- 192.168.123.105:0/1484940784 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 313739682 0 0) 0x7fd79813c600 con 0x7fd79807ae70 2026-03-09T20:20:35.165 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.166+0000 7fd78f7fe640 1 -- 192.168.123.105:0/1484940784 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fd770003620 con 0x7fd79807ae70 2026-03-09T20:20:35.166 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.166+0000 7fd78f7fe640 1 -- 192.168.123.105:0/1484940784 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7fd788002f70 con 0x7fd798136460 2026-03-09T20:20:35.166 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.166+0000 7fd78f7fe640 1 -- 192.168.123.105:0/1484940784 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7fd794003210 con 0x7fd7980772b0 2026-03-09T20:20:35.166 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.166+0000 7fd78f7fe640 1 -- 
192.168.123.105:0/1484940784 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7fd7900032b0 con 0x7fd79807ae70 2026-03-09T20:20:35.166 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.166+0000 7fd78f7fe640 1 -- 192.168.123.105:0/1484940784 <== mon.2 v1:192.168.123.105:6790/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 782541910 0 0) 0x7fd79813b400 con 0x7fd798136460 2026-03-09T20:20:35.166 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.166+0000 7fd78f7fe640 1 -- 192.168.123.105:0/1484940784 >> v1:192.168.123.109:6789/0 conn(0x7fd79807ae70 legacy=0x7fd79810b6d0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:35.166 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.167+0000 7fd78f7fe640 1 -- 192.168.123.105:0/1484940784 >> v1:192.168.123.105:6789/0 conn(0x7fd7980772b0 legacy=0x7fd79810afc0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:35.166 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.167+0000 7fd78f7fe640 1 -- 192.168.123.105:0/1484940784 --> v1:192.168.123.105:6790/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fd79813d800 con 0x7fd798136460 2026-03-09T20:20:35.171 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.167+0000 7fd7a079b640 1 -- 192.168.123.105:0/1484940784 --> v1:192.168.123.105:6790/0 -- mon_subscribe({mgrmap=0+}) -- 0x7fd79813c830 con 0x7fd798136460 2026-03-09T20:20:35.171 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.167+0000 7fd7a079b640 1 -- 192.168.123.105:0/1484940784 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=0}) -- 0x7fd79813cd90 con 0x7fd798136460 2026-03-09T20:20:35.171 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.171+0000 7fd78f7fe640 1 -- 192.168.123.105:0/1484940784 <== mon.2 v1:192.168.123.105:6790/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7fd788003950 con 0x7fd798136460 2026-03-09T20:20:35.171 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.171+0000 7fd78f7fe640 1 -- 192.168.123.105:0/1484940784 <== mon.2 v1:192.168.123.105:6790/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7fd788004840 con 0x7fd798136460 2026-03-09T20:20:35.172 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.173+0000 7fd78f7fe640 1 -- 192.168.123.105:0/1484940784 <== mon.2 v1:192.168.123.105:6790/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 4267040124 0 0) 0x7fd788004ac0 con 0x7fd798136460 2026-03-09T20:20:35.175 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.176+0000 7fd78f7fe640 1 -- 192.168.123.105:0/1484940784 <== mon.2 v1:192.168.123.105:6790/0 8 ==== osd_map(57..57 src has 1..57) ==== 5922+0+0 (unknown 2562478528 0 0) 0x7fd7880948b0 con 0x7fd798136460 2026-03-09T20:20:35.176 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.176+0000 7fd7a079b640 1 -- 192.168.123.105:0/1484940784 --> v1:192.168.123.105:6805/3664200689 -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7fd7680053a0 con 0x7fd768001630 2026-03-09T20:20:35.177 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.178+0000 7fd78f7fe640 1 -- 192.168.123.105:0/1484940784 <== osd.1 v1:192.168.123.105:6805/3664200689 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (unknown 0 0 3365489958) 0x7fd7680053a0 con 0x7fd768001630 2026-03-09T20:20:35.194 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.194+0000 7fd7a079b640 1 -- 192.168.123.105:0/1484940784 --> 
v1:192.168.123.105:6805/3664200689 -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7fd768007130 con 0x7fd768001630 2026-03-09T20:20:35.195 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.195+0000 7fd78f7fe640 1 -- 192.168.123.105:0/1484940784 <== osd.1 v1:192.168.123.105:6805/3664200689 2 ==== command_reply(tid 2: 0 ) ==== 8+0+11 (unknown 0 0 3536998211) 0x7fd768007130 con 0x7fd768001630 2026-03-09T20:20:35.195 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.196+0000 7fd78d7fa640 1 -- 192.168.123.105:0/1484940784 >> v1:192.168.123.105:6805/3664200689 conn(0x7fd768001630 legacy=0x7fd768003af0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:35.195 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.196+0000 7fd78d7fa640 1 -- 192.168.123.105:0/1484940784 >> v1:192.168.123.105:6800/3290461294 conn(0x7fd770078980 legacy=0x7fd77007ae40 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:35.196 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.197+0000 7fd78d7fa640 1 -- 192.168.123.105:0/1484940784 >> v1:192.168.123.105:6790/0 conn(0x7fd798136460 legacy=0x7fd798138900 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:35.197 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.197+0000 7fd79ed11640 1 -- 192.168.123.105:0/1484940784 reap_dead start 2026-03-09T20:20:35.197 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.197+0000 7fd78d7fa640 1 -- 192.168.123.105:0/1484940784 shutdown_connections 2026-03-09T20:20:35.197 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.197+0000 7fd78d7fa640 1 -- 192.168.123.105:0/1484940784 >> 192.168.123.105:0/1484940784 conn(0x7fd79806e900 msgr2=0x7fd79810f2f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:35.210 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.209+0000 7fd78d7fa640 1 -- 192.168.123.105:0/1484940784 shutdown_connections 2026-03-09T20:20:35.210 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.211+0000 7fd78d7fa640 1 -- 192.168.123.105:0/1484940784 wait complete. 2026-03-09T20:20:35.221 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:20:35.317 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.317+0000 7efcb21ad640 1 -- 192.168.123.105:0/905679692 >> v1:192.168.123.105:6789/0 conn(0x7efcac11a770 legacy=0x7efcac11cb60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:35.317 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.318+0000 7efcb21ad640 1 -- 192.168.123.105:0/905679692 shutdown_connections 2026-03-09T20:20:35.317 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.318+0000 7efcb21ad640 1 -- 192.168.123.105:0/905679692 >> 192.168.123.105:0/905679692 conn(0x7efcac06e900 msgr2=0x7efcac06ed10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:35.317 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.318+0000 7efcb21ad640 1 -- 192.168.123.105:0/905679692 shutdown_connections 2026-03-09T20:20:35.318 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.318+0000 7efcb21ad640 1 -- 192.168.123.105:0/905679692 wait complete. 
2026-03-09T20:20:35.318 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.319+0000 7efcb21ad640 1 Processor -- start 2026-03-09T20:20:35.318 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.319+0000 7efcb21ad640 1 -- start start 2026-03-09T20:20:35.318 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.319+0000 7efcb21ad640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7efcac1b8700 con 0x7efcac1b4b00 2026-03-09T20:20:35.318 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.319+0000 7efcb21ad640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7efcac1b9900 con 0x7efcac11e280 2026-03-09T20:20:35.318 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.319+0000 7efcb21ad640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7efcac1bab00 con 0x7efcac074230 2026-03-09T20:20:35.319 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.319+0000 7efcabfff640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7efcac1b4b00 0x7efcac1b6ef0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:50636/0 (socket says 192.168.123.105:50636) 2026-03-09T20:20:35.320 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.319+0000 7efcabfff640 1 -- 192.168.123.105:0/2078652106 learned_addr learned my addr 192.168.123.105:0/2078652106 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:20:35.322 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.322+0000 7efca8ff9640 1 -- 192.168.123.105:0/2078652106 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2524228251 0 0) 0x7efcac1b9900 con 0x7efcac11e280 2026-03-09T20:20:35.322 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.323+0000 7efca8ff9640 1 -- 192.168.123.105:0/2078652106 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7efc74003620 con 0x7efcac11e280 2026-03-09T20:20:35.322 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.323+0000 7efca8ff9640 1 -- 192.168.123.105:0/2078652106 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 4232870496 0 0) 0x7efcac1b8700 con 0x7efcac1b4b00 2026-03-09T20:20:35.322 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.323+0000 7efca8ff9640 1 -- 192.168.123.105:0/2078652106 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7efcac1b9900 con 0x7efcac1b4b00 2026-03-09T20:20:35.323 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.323+0000 7efca8ff9640 1 -- 192.168.123.105:0/2078652106 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3582668059 0 0) 0x7efcac1bab00 con 0x7efcac074230 2026-03-09T20:20:35.323 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.324+0000 7efca8ff9640 1 -- 192.168.123.105:0/2078652106 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7efcac1b8700 con 0x7efcac074230 2026-03-09T20:20:35.323 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.324+0000 7efca8ff9640 1 -- 192.168.123.105:0/2078652106 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2446844100 0 0) 0x7efc74003620 con 0x7efcac11e280 2026-03-09T20:20:35.325 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.325+0000 7f4afd317640 1 -- 192.168.123.105:0/3311210612 >> 
v1:192.168.123.105:6789/0 conn(0x7f4af8074230 legacy=0x7f4af8074610 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:35.325 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.325+0000 7f4afd317640 1 -- 192.168.123.105:0/3311210612 shutdown_connections 2026-03-09T20:20:35.325 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.325+0000 7f4afd317640 1 -- 192.168.123.105:0/3311210612 >> 192.168.123.105:0/3311210612 conn(0x7f4af806e900 msgr2=0x7f4af806ed10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:35.325 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.326+0000 7efca8ff9640 1 -- 192.168.123.105:0/2078652106 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7efcac1bab00 con 0x7efcac11e280 2026-03-09T20:20:35.325 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.326+0000 7efca8ff9640 1 -- 192.168.123.105:0/2078652106 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 346376392 0 0) 0x7efcac1b9900 con 0x7efcac1b4b00 2026-03-09T20:20:35.325 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.326+0000 7efca8ff9640 1 -- 192.168.123.105:0/2078652106 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7efc74003620 con 0x7efcac1b4b00 2026-03-09T20:20:35.326 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.326+0000 7efca8ff9640 1 -- 192.168.123.105:0/2078652106 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 842820831 0 0) 0x7efcac1b8700 con 0x7efcac074230 2026-03-09T20:20:35.326 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.325+0000 7f4afd317640 1 -- 192.168.123.105:0/3311210612 shutdown_connections 2026-03-09T20:20:35.326 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.325+0000 7f4afd317640 1 -- 192.168.123.105:0/3311210612 wait complete. 
2026-03-09T20:20:35.326 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.326+0000 7f4afd317640 1 Processor -- start 2026-03-09T20:20:35.326 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.326+0000 7f4afd317640 1 -- start start 2026-03-09T20:20:35.326 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.326+0000 7f4afd317640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f4af8086120 con 0x7f4af80772b0 2026-03-09T20:20:35.326 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.326+0000 7f4afd317640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f4af80862f0 con 0x7f4af8085c70 2026-03-09T20:20:35.326 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.326+0000 7f4afd317640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f4af80864c0 con 0x7f4af807ae70 2026-03-09T20:20:35.326 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.326+0000 7f4af67fc640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7f4af807ae70 0x7f4af8085560 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.105:47500/0 (socket says 192.168.123.105:47500) 2026-03-09T20:20:35.326 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.326+0000 7f4af67fc640 1 -- 192.168.123.105:0/2676168354 learned_addr learned my addr 192.168.123.105:0/2676168354 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:20:35.326 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.327+0000 7f4ad7fff640 1 -- 192.168.123.105:0/2676168354 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3082206961 0 0) 0x7f4af80864c0 con 0x7f4af807ae70 2026-03-09T20:20:35.326 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.327+0000 7f4ad7fff640 1 -- 192.168.123.105:0/2676168354 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f4ad0003620 con 0x7f4af807ae70 2026-03-09T20:20:35.326 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.327+0000 7efca8ff9640 1 -- 192.168.123.105:0/2078652106 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7efcac1b9900 con 0x7efcac074230 2026-03-09T20:20:35.327 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.327+0000 7efca8ff9640 1 -- 192.168.123.105:0/2078652106 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7efc94002f50 con 0x7efcac11e280 2026-03-09T20:20:35.327 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.327+0000 7f4ad7fff640 1 -- 192.168.123.105:0/2676168354 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3945012652 0 0) 0x7f4af80862f0 con 0x7f4af8085c70 2026-03-09T20:20:35.327 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.328+0000 7f4ad7fff640 1 -- 192.168.123.105:0/2676168354 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f4af80864c0 con 0x7f4af8085c70 2026-03-09T20:20:35.327 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.328+0000 7f4ad7fff640 1 -- 192.168.123.105:0/2676168354 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3005227186 0 0) 0x7f4af8086120 con 0x7f4af80772b0 2026-03-09T20:20:35.327 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.328+0000 7f4ad7fff640 1 -- 192.168.123.105:0/2676168354 --> 
v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f4af80862f0 con 0x7f4af80772b0 2026-03-09T20:20:35.327 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.328+0000 7efca8ff9640 1 -- 192.168.123.105:0/2078652106 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7efc9c0034a0 con 0x7efcac1b4b00 2026-03-09T20:20:35.328 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.328+0000 7f4ad7fff640 1 -- 192.168.123.105:0/2676168354 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 374175348 0 0) 0x7f4ad0003620 con 0x7f4af807ae70 2026-03-09T20:20:35.329 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.328+0000 7f4ad7fff640 1 -- 192.168.123.105:0/2676168354 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f4af8086120 con 0x7f4af807ae70 2026-03-09T20:20:35.329 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.330+0000 7f4ad7fff640 1 -- 192.168.123.105:0/2676168354 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3236601587 0 0) 0x7f4af80864c0 con 0x7f4af8085c70 2026-03-09T20:20:35.329 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.330+0000 7f4ad7fff640 1 -- 192.168.123.105:0/2676168354 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f4ad0003620 con 0x7f4af8085c70 2026-03-09T20:20:35.329 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.330+0000 7f4ad7fff640 1 -- 192.168.123.105:0/2676168354 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 119547357 0 0) 0x7f4af80862f0 con 0x7f4af80772b0 2026-03-09T20:20:35.330 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.330+0000 7f4ad7fff640 1 -- 192.168.123.105:0/2676168354 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f4af80864c0 con 0x7f4af80772b0 2026-03-09T20:20:35.330 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.330+0000 7f4ad7fff640 1 -- 192.168.123.105:0/2676168354 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f4ae8003050 con 0x7f4af807ae70 2026-03-09T20:20:35.330 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.330+0000 7f4ad7fff640 1 -- 192.168.123.105:0/2676168354 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f4aec002f70 con 0x7f4af8085c70 2026-03-09T20:20:35.330 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.330+0000 7f4ad7fff640 1 -- 192.168.123.105:0/2676168354 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f4af0003500 con 0x7f4af80772b0 2026-03-09T20:20:35.330 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.330+0000 7f4ad7fff640 1 -- 192.168.123.105:0/2676168354 <== mon.2 v1:192.168.123.105:6790/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2064947306 0 0) 0x7f4af8086120 con 0x7f4af807ae70 2026-03-09T20:20:35.330 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.330+0000 7f4ad7fff640 1 -- 192.168.123.105:0/2676168354 >> v1:192.168.123.109:6789/0 conn(0x7f4af8085c70 legacy=0x7f4af81bffe0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:35.330 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.330+0000 7f4ad7fff640 1 -- 192.168.123.105:0/2676168354 >> v1:192.168.123.105:6789/0 conn(0x7f4af80772b0 
legacy=0x7f4af80837d0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:35.330 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.329+0000 7efca8ff9640 1 -- 192.168.123.105:0/2078652106 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7efca0002f70 con 0x7efcac074230 2026-03-09T20:20:35.330 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.331+0000 7f4ad7fff640 1 -- 192.168.123.105:0/2676168354 --> v1:192.168.123.105:6790/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f4af81c4730 con 0x7f4af807ae70 2026-03-09T20:20:35.331 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.331+0000 7efca8ff9640 1 -- 192.168.123.105:0/2078652106 <== mon.1 v1:192.168.123.109:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 4114106637 0 0) 0x7efcac1bab00 con 0x7efcac11e280 2026-03-09T20:20:35.331 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.331+0000 7efca8ff9640 1 -- 192.168.123.105:0/2078652106 >> v1:192.168.123.105:6790/0 conn(0x7efcac074230 legacy=0x7efcac10df80 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:35.331 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.332+0000 7efca8ff9640 1 -- 192.168.123.105:0/2078652106 >> v1:192.168.123.105:6789/0 conn(0x7efcac1b4b00 legacy=0x7efcac1b6ef0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:35.331 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.332+0000 7efca8ff9640 1 -- 192.168.123.105:0/2078652106 --> v1:192.168.123.109:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7efcac1bbd00 con 0x7efcac11e280 2026-03-09T20:20:35.334 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.333+0000 7efcb21ad640 1 -- 192.168.123.105:0/2078652106 --> v1:192.168.123.109:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7efcac1b9b30 con 0x7efcac11e280 2026-03-09T20:20:35.334 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.333+0000 7efcb21ad640 1 -- 192.168.123.105:0/2078652106 --> v1:192.168.123.109:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7efcac1ba110 con 0x7efcac11e280 2026-03-09T20:20:35.334 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.335+0000 7efca8ff9640 1 -- 192.168.123.105:0/2078652106 <== mon.1 v1:192.168.123.109:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7efc940039e0 con 0x7efcac11e280 2026-03-09T20:20:35.339 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.339+0000 7efca8ff9640 1 -- 192.168.123.105:0/2078652106 <== mon.1 v1:192.168.123.109:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7efc94004d50 con 0x7efcac11e280 2026-03-09T20:20:35.342 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.331+0000 7f4afd317640 1 -- 192.168.123.105:0/2676168354 --> v1:192.168.123.105:6790/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f4af81c1700 con 0x7f4af807ae70 2026-03-09T20:20:35.342 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.331+0000 7f4afd317640 1 -- 192.168.123.105:0/2676168354 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=0}) -- 0x7f4af81c3e70 con 0x7f4af807ae70 2026-03-09T20:20:35.342 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.341+0000 7f4ad7fff640 1 -- 192.168.123.105:0/2676168354 <== mon.2 v1:192.168.123.105:6790/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f4ae8003b60 con 0x7f4af807ae70 2026-03-09T20:20:35.342 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.341+0000 
7f4ad7fff640 1 -- 192.168.123.105:0/2676168354 <== mon.2 v1:192.168.123.105:6790/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f4ae8005b50 con 0x7f4af807ae70 2026-03-09T20:20:35.342 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.342+0000 7f4ad7fff640 1 -- 192.168.123.105:0/2676168354 <== mon.2 v1:192.168.123.105:6790/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 4267040124 0 0) 0x7f4ae8006e00 con 0x7f4af807ae70 2026-03-09T20:20:35.344 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.344+0000 7f4ad7fff640 1 -- 192.168.123.105:0/2676168354 <== mon.2 v1:192.168.123.105:6790/0 8 ==== osd_map(57..57 src has 1..57) ==== 5922+0+0 (unknown 2562478528 0 0) 0x7f4ae8095cb0 con 0x7f4af807ae70 2026-03-09T20:20:35.346 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.346+0000 7efca8ff9640 1 -- 192.168.123.105:0/2078652106 <== mon.1 v1:192.168.123.109:6789/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 4267040124 0 0) 0x7efc9401d680 con 0x7efcac11e280 2026-03-09T20:20:35.346 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.347+0000 7f4afd317640 1 -- 192.168.123.105:0/2676168354 --> v1:192.168.123.105:6813/4176641888 -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7f4af8071f20 con 0x7f4af8076700 2026-03-09T20:20:35.349 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.348+0000 7f4ad7fff640 1 -- 192.168.123.105:0/2676168354 <== osd.3 v1:192.168.123.105:6813/4176641888 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (unknown 0 0 3365489958) 0x7f4aec007780 con 0x7f4af8076700 2026-03-09T20:20:35.354 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.354+0000 7efca8ff9640 1 -- 192.168.123.105:0/2078652106 <== mon.1 v1:192.168.123.109:6789/0 8 ==== osd_map(57..57 src has 1..57) ==== 5922+0+0 (unknown 2562478528 0 0) 0x7efc940948e0 con 0x7efcac11e280 2026-03-09T20:20:35.354 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.355+0000 7efc8a7fc640 1 -- 192.168.123.105:0/2078652106 --> v1:192.168.123.105:6801/1625499026 -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7efc780053a0 con 0x7efc78001630 2026-03-09T20:20:35.358 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.359+0000 7efca8ff9640 1 -- 192.168.123.105:0/2078652106 <== osd.0 v1:192.168.123.105:6801/1625499026 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (unknown 0 0 3365489958) 0x7efc780053a0 con 0x7efc78001630 2026-03-09T20:20:35.363 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.362+0000 7f4afd317640 1 -- 192.168.123.105:0/2676168354 --> v1:192.168.123.105:6813/4176641888 -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7f4af810a470 con 0x7f4af8076700 2026-03-09T20:20:35.371 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.371+0000 7f4ad7fff640 1 -- 192.168.123.105:0/2676168354 <== osd.3 v1:192.168.123.105:6813/4176641888 2 ==== command_reply(tid 2: 0 ) ==== 8+0+12 (unknown 0 0 874641167) 0x7f4af810a470 con 0x7f4af8076700 2026-03-09T20:20:35.375 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.375+0000 7f4ad5ffb640 1 -- 192.168.123.105:0/2676168354 >> v1:192.168.123.105:6813/4176641888 conn(0x7f4af8076700 legacy=0x7f4af80630c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:35.375 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.375+0000 7f4ad5ffb640 1 -- 192.168.123.105:0/2676168354 >> v1:192.168.123.105:6800/3290461294 conn(0x7f4ad0078970 legacy=0x7f4ad007ae30 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 
2026-03-09T20:20:35.375 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.375+0000 7f4ad5ffb640 1 -- 192.168.123.105:0/2676168354 >> v1:192.168.123.105:6790/0 conn(0x7f4af807ae70 legacy=0x7f4af8085560 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:35.376 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.376+0000 7f4af77fe640 1 -- 192.168.123.105:0/2676168354 reap_dead start 2026-03-09T20:20:35.384 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.380+0000 7f4ad5ffb640 1 -- 192.168.123.105:0/2676168354 shutdown_connections 2026-03-09T20:20:35.384 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.380+0000 7f4ad5ffb640 1 -- 192.168.123.105:0/2676168354 >> 192.168.123.105:0/2676168354 conn(0x7f4af806e900 msgr2=0x7f4af810f2f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:35.384 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.380+0000 7f4ad5ffb640 1 -- 192.168.123.105:0/2676168354 shutdown_connections 2026-03-09T20:20:35.384 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.380+0000 7f4ad5ffb640 1 -- 192.168.123.105:0/2676168354 wait complete. 2026-03-09T20:20:35.421 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.421+0000 7efcb21ad640 1 -- 192.168.123.105:0/2078652106 --> v1:192.168.123.105:6801/1625499026 -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7efc78007130 con 0x7efc78001630 2026-03-09T20:20:35.421 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.421+0000 7efca8ff9640 1 -- 192.168.123.105:0/2078652106 <== osd.0 v1:192.168.123.105:6801/1625499026 2 ==== command_reply(tid 2: 0 ) ==== 8+0+11 (unknown 0 0 865349217) 0x7efc78007130 con 0x7efc78001630 2026-03-09T20:20:35.427 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.427+0000 7efcb21ad640 1 -- 192.168.123.105:0/2078652106 >> v1:192.168.123.105:6801/1625499026 conn(0x7efc78001630 legacy=0x7efc78003af0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:35.427 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.428+0000 7efcb21ad640 1 -- 192.168.123.105:0/2078652106 >> v1:192.168.123.105:6800/3290461294 conn(0x7efc740789d0 legacy=0x7efc7407ae90 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:35.427 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.428+0000 7efcb21ad640 1 -- 192.168.123.105:0/2078652106 >> v1:192.168.123.109:6789/0 conn(0x7efcac11e280 legacy=0x7efcac1b33e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:35.427 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.428+0000 7efcabfff640 1 -- 192.168.123.105:0/2078652106 reap_dead start 2026-03-09T20:20:35.427 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.428+0000 7efcb21ad640 1 -- 192.168.123.105:0/2078652106 shutdown_connections 2026-03-09T20:20:35.427 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.428+0000 7efcb21ad640 1 -- 192.168.123.105:0/2078652106 >> 192.168.123.105:0/2078652106 conn(0x7efcac06e900 msgr2=0x7efcac072bd0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:35.427 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.428+0000 7efcb21ad640 1 -- 192.168.123.105:0/2078652106 shutdown_connections 2026-03-09T20:20:35.428 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.428+0000 7efcb21ad640 1 -- 192.168.123.105:0/2078652106 wait complete. 
2026-03-09T20:20:35.439 INFO:teuthology.orchestra.run.vm05.stdout:51539607571 2026-03-09T20:20:35.439 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph osd last-stat-seq osd.1 2026-03-09T20:20:35.456 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.456+0000 7f05f4d1b640 1 -- 192.168.123.105:0/292546599 >> v1:192.168.123.105:6789/0 conn(0x7f05f0074230 legacy=0x7f05f0074610 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:35.463 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.463+0000 7f05f4d1b640 1 -- 192.168.123.105:0/292546599 shutdown_connections 2026-03-09T20:20:35.463 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.463+0000 7f05f4d1b640 1 -- 192.168.123.105:0/292546599 >> 192.168.123.105:0/292546599 conn(0x7f05f006e900 msgr2=0x7f05f006ed10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:35.463 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.463+0000 7f05f4d1b640 1 -- 192.168.123.105:0/292546599 shutdown_connections 2026-03-09T20:20:35.463 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.464+0000 7f05f4d1b640 1 -- 192.168.123.105:0/292546599 wait complete. 2026-03-09T20:20:35.463 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.464+0000 7f05f4d1b640 1 Processor -- start 2026-03-09T20:20:35.464 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.465+0000 7f05f4d1b640 1 -- start start 2026-03-09T20:20:35.464 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.465+0000 7f05f4d1b640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f05f01b8410 con 0x7f05f011e280 2026-03-09T20:20:35.464 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.465+0000 7f05f4d1b640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f05f01b9610 con 0x7f05f0074230 2026-03-09T20:20:35.464 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.465+0000 7f05f4d1b640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f05f01ba810 con 0x7f05f011a770 2026-03-09T20:20:35.467 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.465+0000 7f05eed76640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f05f011e280 0x7f05f01b6b10 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:50662/0 (socket says 192.168.123.105:50662) 2026-03-09T20:20:35.467 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.465+0000 7f05eed76640 1 -- 192.168.123.105:0/3566592933 learned_addr learned my addr 192.168.123.105:0/3566592933 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:20:35.467 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.467+0000 7f05db7fe640 1 -- 192.168.123.105:0/3566592933 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3373481152 0 0) 0x7f05f01b8410 con 0x7f05f011e280 2026-03-09T20:20:35.467 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.467+0000 7f05db7fe640 1 -- 192.168.123.105:0/3566592933 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f05c8003620 con 0x7f05f011e280 2026-03-09T20:20:35.467 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.467+0000 7f05db7fe640 1 -- 192.168.123.105:0/3566592933 <== mon.2 v1:192.168.123.105:6790/0 1 ==== 
auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1536487949 0 0) 0x7f05f01ba810 con 0x7f05f011a770 2026-03-09T20:20:35.467 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.467+0000 7f05db7fe640 1 -- 192.168.123.105:0/3566592933 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f05f01b8410 con 0x7f05f011a770 2026-03-09T20:20:35.469 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.470+0000 7f05db7fe640 1 -- 192.168.123.105:0/3566592933 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3574852369 0 0) 0x7f05f01b9610 con 0x7f05f0074230 2026-03-09T20:20:35.469 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.470+0000 7f05db7fe640 1 -- 192.168.123.105:0/3566592933 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f05f01ba810 con 0x7f05f0074230 2026-03-09T20:20:35.469 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.470+0000 7f05db7fe640 1 -- 192.168.123.105:0/3566592933 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 919804978 0 0) 0x7f05f01b8410 con 0x7f05f011a770 2026-03-09T20:20:35.469 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.470+0000 7f05db7fe640 1 -- 192.168.123.105:0/3566592933 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f05f01b9610 con 0x7f05f011a770 2026-03-09T20:20:35.470 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.471+0000 7f05db7fe640 1 -- 192.168.123.105:0/3566592933 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2321114272 0 0) 0x7f05c8003620 con 0x7f05f011e280 2026-03-09T20:20:35.470 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.471+0000 7f05db7fe640 1 -- 192.168.123.105:0/3566592933 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f05f01b8410 con 0x7f05f011e280 2026-03-09T20:20:35.470 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.471+0000 7f05db7fe640 1 -- 192.168.123.105:0/3566592933 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 495300737 0 0) 0x7f05f01ba810 con 0x7f05f0074230 2026-03-09T20:20:35.470 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.471+0000 7f05db7fe640 1 -- 192.168.123.105:0/3566592933 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f05c8003620 con 0x7f05f0074230 2026-03-09T20:20:35.470 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.471+0000 7f05db7fe640 1 -- 192.168.123.105:0/3566592933 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f05e40030c0 con 0x7f05f011a770 2026-03-09T20:20:35.470 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.471+0000 7f05db7fe640 1 -- 192.168.123.105:0/3566592933 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f05e0003440 con 0x7f05f011e280 2026-03-09T20:20:35.470 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.471+0000 7f05db7fe640 1 -- 192.168.123.105:0/3566592933 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f05dc003cd0 con 0x7f05f0074230 2026-03-09T20:20:35.470 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.471+0000 7f05db7fe640 1 -- 192.168.123.105:0/3566592933 <== mon.2 v1:192.168.123.105:6790/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 1869706109 
0 0) 0x7f05f01b9610 con 0x7f05f011a770 2026-03-09T20:20:35.470 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.471+0000 7f05db7fe640 1 -- 192.168.123.105:0/3566592933 >> v1:192.168.123.109:6789/0 conn(0x7f05f0074230 legacy=0x7f05f010e070 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:35.470 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.471+0000 7f05db7fe640 1 -- 192.168.123.105:0/3566592933 >> v1:192.168.123.105:6789/0 conn(0x7f05f011e280 legacy=0x7f05f01b6b10 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:35.470 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.471+0000 7f05db7fe640 1 -- 192.168.123.105:0/3566592933 --> v1:192.168.123.105:6790/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f05f01bba10 con 0x7f05f011a770 2026-03-09T20:20:35.483 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.474+0000 7f05f4d1b640 1 -- 192.168.123.105:0/3566592933 --> v1:192.168.123.105:6790/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f05f01baa40 con 0x7f05f011a770 2026-03-09T20:20:35.483 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.474+0000 7f05f4d1b640 1 -- 192.168.123.105:0/3566592933 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=0}) -- 0x7f05f01bafd0 con 0x7f05f011a770 2026-03-09T20:20:35.483 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.483+0000 7f05db7fe640 1 -- 192.168.123.105:0/3566592933 <== mon.2 v1:192.168.123.105:6790/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f05e4003aa0 con 0x7f05f011a770 2026-03-09T20:20:35.483 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.483+0000 7f05db7fe640 1 -- 192.168.123.105:0/3566592933 <== mon.2 v1:192.168.123.105:6790/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f05e4004950 con 0x7f05f011a770 2026-03-09T20:20:35.483 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.483+0000 7f05f4d1b640 1 -- 192.168.123.105:0/3566592933 --> v1:192.168.123.105:6790/0 -- mon_get_version(what=osdmap handle=1) -- 0x7f05f010a470 con 0x7f05f011a770 2026-03-09T20:20:35.483 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.484+0000 7f05db7fe640 1 -- 192.168.123.105:0/3566592933 <== mon.2 v1:192.168.123.105:6790/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 4267040124 0 0) 0x7f05e4005c20 con 0x7f05f011a770 2026-03-09T20:20:35.484 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.485+0000 7f05db7fe640 1 -- 192.168.123.105:0/3566592933 <== mon.2 v1:192.168.123.105:6790/0 8 ==== osd_map(57..57 src has 1..57) ==== 5922+0+0 (unknown 2562478528 0 0) 0x7f05e4094670 con 0x7f05f011a770 2026-03-09T20:20:35.484 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.485+0000 7f05db7fe640 1 -- 192.168.123.105:0/3566592933 --> v1:192.168.123.109:6800/4063967321 -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7f05c8086950 con 0x7f05c8082c80 2026-03-09T20:20:35.484 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.485+0000 7f05db7fe640 1 -- 192.168.123.105:0/3566592933 <== mon.2 v1:192.168.123.105:6790/0 9 ==== mon_get_version_reply(handle=1 version=57) ==== 24+0+0 (unknown 3002131633 0 0) 0x7f05e4094a40 con 0x7f05f011a770 2026-03-09T20:20:35.486 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.487+0000 7f05db7fe640 1 -- 192.168.123.105:0/3566592933 <== osd.4 v1:192.168.123.109:6800/4063967321 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (unknown 0 0 3365489958) 0x7f05c8086950 con 0x7f05c8082c80 
2026-03-09T20:20:35.568 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:35 vm05 ceph-mon[51870]: Checking pool "datapool" exists for service iscsi.datapool 2026-03-09T20:20:35.568 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:35 vm05 ceph-mon[51870]: Deploying daemon prometheus.a on vm09 2026-03-09T20:20:35.568 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:35 vm05 ceph-mon[61345]: Checking pool "datapool" exists for service iscsi.datapool 2026-03-09T20:20:35.568 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:35 vm05 ceph-mon[61345]: Deploying daemon prometheus.a on vm09 2026-03-09T20:20:35.583 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.583+0000 7f0baf65f640 1 -- 192.168.123.105:0/3197668469 >> v1:192.168.123.105:6789/0 conn(0x7f0ba8074230 legacy=0x7f0ba8074610 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:35.585 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.584+0000 7f0baf65f640 1 -- 192.168.123.105:0/3197668469 shutdown_connections 2026-03-09T20:20:35.585 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.584+0000 7f0baf65f640 1 -- 192.168.123.105:0/3197668469 >> 192.168.123.105:0/3197668469 conn(0x7f0ba806e900 msgr2=0x7f0ba806ed10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:35.585 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.585+0000 7f0baf65f640 1 -- 192.168.123.105:0/3197668469 shutdown_connections 2026-03-09T20:20:35.585 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.585+0000 7f0baf65f640 1 -- 192.168.123.105:0/3197668469 wait complete. 2026-03-09T20:20:35.585 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.586+0000 7f0baf65f640 1 Processor -- start 2026-03-09T20:20:35.588 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.586+0000 7f0baf65f640 1 -- start start 2026-03-09T20:20:35.588 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.586+0000 7f0baf65f640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f0ba81b84f0 con 0x7f0ba811e280 2026-03-09T20:20:35.588 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.586+0000 7f0baf65f640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f0ba81b96f0 con 0x7f0ba8074230 2026-03-09T20:20:35.588 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.586+0000 7f0baf65f640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f0ba81ba8f0 con 0x7f0ba811a770 2026-03-09T20:20:35.588 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.586+0000 7f0badbd5640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f0ba811e280 0x7f0ba81b6bf0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:50692/0 (socket says 192.168.123.105:50692) 2026-03-09T20:20:35.588 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.586+0000 7f0badbd5640 1 -- 192.168.123.105:0/1127058808 learned_addr learned my addr 192.168.123.105:0/1127058808 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:20:35.588 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.587+0000 7f0b967fc640 1 -- 192.168.123.105:0/1127058808 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2791591910 0 0) 0x7f0ba81b84f0 con 0x7f0ba811e280 2026-03-09T20:20:35.588 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.587+0000 7f0b967fc640 1 -- 192.168.123.105:0/1127058808 
--> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f0b80003620 con 0x7f0ba811e280 2026-03-09T20:20:35.588 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.587+0000 7f0b967fc640 1 -- 192.168.123.105:0/1127058808 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2624791095 0 0) 0x7f0ba81ba8f0 con 0x7f0ba811a770 2026-03-09T20:20:35.588 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.587+0000 7f0b967fc640 1 -- 192.168.123.105:0/1127058808 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f0ba81b84f0 con 0x7f0ba811a770 2026-03-09T20:20:35.588 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.587+0000 7f0b967fc640 1 -- 192.168.123.105:0/1127058808 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 4224133282 0 0) 0x7f0ba81b96f0 con 0x7f0ba8074230 2026-03-09T20:20:35.588 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.587+0000 7f0b967fc640 1 -- 192.168.123.105:0/1127058808 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f0ba81ba8f0 con 0x7f0ba8074230 2026-03-09T20:20:35.588 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.587+0000 7f0b967fc640 1 -- 192.168.123.105:0/1127058808 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 359909200 0 0) 0x7f0b80003620 con 0x7f0ba811e280 2026-03-09T20:20:35.588 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.587+0000 7f0b967fc640 1 -- 192.168.123.105:0/1127058808 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f0ba81b96f0 con 0x7f0ba811e280 2026-03-09T20:20:35.588 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.587+0000 7f0b967fc640 1 -- 192.168.123.105:0/1127058808 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f0ba4003410 con 0x7f0ba811e280 2026-03-09T20:20:35.588 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.587+0000 7f0b967fc640 1 -- 192.168.123.105:0/1127058808 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2446780697 0 0) 0x7f0ba81b84f0 con 0x7f0ba811a770 2026-03-09T20:20:35.588 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.587+0000 7f0b967fc640 1 -- 192.168.123.105:0/1127058808 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f0b80003620 con 0x7f0ba811a770 2026-03-09T20:20:35.588 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.587+0000 7f0b967fc640 1 -- 192.168.123.105:0/1127058808 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 1345640887 0 0) 0x7f0ba81b96f0 con 0x7f0ba811e280 2026-03-09T20:20:35.588 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.588+0000 7f0b967fc640 1 -- 192.168.123.105:0/1127058808 >> v1:192.168.123.105:6790/0 conn(0x7f0ba811a770 legacy=0x7f0ba81b33e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:35.588 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.588+0000 7f0b967fc640 1 -- 192.168.123.105:0/1127058808 >> v1:192.168.123.109:6789/0 conn(0x7f0ba8074230 legacy=0x7f0ba810e150 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:35.588 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.588+0000 7f0b967fc640 1 -- 192.168.123.105:0/1127058808 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) 
-- 0x7f0ba81bbaf0 con 0x7f0ba811e280 2026-03-09T20:20:35.588 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.588+0000 7f0baf65f640 1 -- 192.168.123.105:0/1127058808 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f0ba81b8720 con 0x7f0ba811e280 2026-03-09T20:20:35.590 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.588+0000 7f0baf65f640 1 -- 192.168.123.105:0/1127058808 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f0ba81b8d00 con 0x7f0ba811e280 2026-03-09T20:20:35.590 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.589+0000 7f0b967fc640 1 -- 192.168.123.105:0/1127058808 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f0ba4003db0 con 0x7f0ba811e280 2026-03-09T20:20:35.590 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.589+0000 7f0b967fc640 1 -- 192.168.123.105:0/1127058808 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f0ba4005e20 con 0x7f0ba811e280 2026-03-09T20:20:35.599 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.599+0000 7f0b967fc640 1 -- 192.168.123.105:0/1127058808 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 4267040124 0 0) 0x7f0ba40070d0 con 0x7f0ba811e280 2026-03-09T20:20:35.599 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.598+0000 7f797e83c640 1 -- 192.168.123.105:0/3138290759 >> v1:192.168.123.109:6789/0 conn(0x7f797811e280 legacy=0x7f7978120670 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:35.601 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.602+0000 7f797e83c640 1 -- 192.168.123.105:0/3138290759 shutdown_connections 2026-03-09T20:20:35.601 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.602+0000 7f797e83c640 1 -- 192.168.123.105:0/3138290759 >> 192.168.123.105:0/3138290759 conn(0x7f797806e900 msgr2=0x7f797806ed10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:35.601 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.602+0000 7f797e83c640 1 -- 192.168.123.105:0/3138290759 shutdown_connections 2026-03-09T20:20:35.601 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.602+0000 7f797e83c640 1 -- 192.168.123.105:0/3138290759 wait complete. 
2026-03-09T20:20:35.602 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.603+0000 7f797e83c640 1 Processor -- start 2026-03-09T20:20:35.603 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.602+0000 7f0b967fc640 1 -- 192.168.123.105:0/1127058808 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(57..57 src has 1..57) ==== 5922+0+0 (unknown 2562478528 0 0) 0x7f0ba4095e60 con 0x7f0ba811e280 2026-03-09T20:20:35.613 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.603+0000 7f0b77fff640 1 -- 192.168.123.105:0/1127058808 --> v1:192.168.123.109:6808/3079043049 -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7f0b700053a0 con 0x7f0b70001630 2026-03-09T20:20:35.613 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.607+0000 7f0b967fc640 1 -- 192.168.123.105:0/1127058808 <== osd.6 v1:192.168.123.109:6808/3079043049 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (unknown 0 0 3365489958) 0x7f0b700053a0 con 0x7f0b70001630 2026-03-09T20:20:35.620 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.603+0000 7f797e83c640 1 -- start start 2026-03-09T20:20:35.620 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.603+0000 7f797e83c640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f79781b8580 con 0x7f797811e280 2026-03-09T20:20:35.620 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.603+0000 7f797e83c640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f79781b9780 con 0x7f7978074230 2026-03-09T20:20:35.620 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.603+0000 7f797e83c640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f79781ba980 con 0x7f797811a770 2026-03-09T20:20:35.620 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.604+0000 7f797cdb2640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f797811e280 0x7f79781b6c80 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:50698/0 (socket says 192.168.123.105:50698) 2026-03-09T20:20:35.620 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.604+0000 7f797cdb2640 1 -- 192.168.123.105:0/3168317683 learned_addr learned my addr 192.168.123.105:0/3168317683 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:20:35.621 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.621+0000 7f79757fa640 1 -- 192.168.123.105:0/3168317683 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3172712513 0 0) 0x7f79781b9780 con 0x7f7978074230 2026-03-09T20:20:35.621 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.622+0000 7f79757fa640 1 -- 192.168.123.105:0/3168317683 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f794c003620 con 0x7f7978074230 2026-03-09T20:20:35.621 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.620+0000 7f05f4d1b640 1 -- 192.168.123.105:0/3566592933 --> v1:192.168.123.109:6800/4063967321 -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7f05f010a470 con 0x7f05c8082c80 2026-03-09T20:20:35.621 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.621+0000 7f05db7fe640 1 -- 192.168.123.105:0/3566592933 <== osd.4 v1:192.168.123.109:6800/4063967321 2 ==== command_reply(tid 2: 0 ) ==== 8+0+12 (unknown 0 0 2642843322) 0x7f05f010a470 con 0x7f05c8082c80 2026-03-09T20:20:35.622 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.622+0000 
7f79757fa640 1 -- 192.168.123.105:0/3168317683 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 153966690 0 0) 0x7f79781ba980 con 0x7f797811a770 2026-03-09T20:20:35.622 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.622+0000 7f79757fa640 1 -- 192.168.123.105:0/3168317683 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f79781b9780 con 0x7f797811a770 2026-03-09T20:20:35.622 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.622+0000 7f79757fa640 1 -- 192.168.123.105:0/3168317683 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2285309743 0 0) 0x7f79781b8580 con 0x7f797811e280 2026-03-09T20:20:35.622 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.622+0000 7f05f4d1b640 1 -- 192.168.123.105:0/3566592933 >> v1:192.168.123.109:6800/4063967321 conn(0x7f05c8082c80 legacy=0x7f05c80850e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:35.628 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.622+0000 7f79757fa640 1 -- 192.168.123.105:0/3168317683 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f79781ba980 con 0x7f797811e280 2026-03-09T20:20:35.628 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.627+0000 7f79757fa640 1 -- 192.168.123.105:0/3168317683 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 4178462343 0 0) 0x7f79781ba980 con 0x7f797811e280 2026-03-09T20:20:35.628 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.627+0000 7f79757fa640 1 -- 192.168.123.105:0/3168317683 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f79781b8580 con 0x7f797811e280 2026-03-09T20:20:35.628 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.627+0000 7f79757fa640 1 -- 192.168.123.105:0/3168317683 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f7968003580 con 0x7f797811e280 2026-03-09T20:20:35.628 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.627+0000 7f79757fa640 1 -- 192.168.123.105:0/3168317683 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 78091906 0 0) 0x7f794c003620 con 0x7f7978074230 2026-03-09T20:20:35.628 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.627+0000 7f79757fa640 1 -- 192.168.123.105:0/3168317683 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f79781ba980 con 0x7f7978074230 2026-03-09T20:20:35.628 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.627+0000 7f79757fa640 1 -- 192.168.123.105:0/3168317683 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3425159941 0 0) 0x7f79781b9780 con 0x7f797811a770 2026-03-09T20:20:35.628 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.627+0000 7f79757fa640 1 -- 192.168.123.105:0/3168317683 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f794c003620 con 0x7f797811a770 2026-03-09T20:20:35.628 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.627+0000 7f79757fa640 1 -- 192.168.123.105:0/3168317683 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 1332993233 0 0) 0x7f79781b8580 con 0x7f797811e280 2026-03-09T20:20:35.628 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.627+0000 7f05f4d1b640 1 -- 
192.168.123.105:0/3566592933 >> v1:192.168.123.105:6800/3290461294 conn(0x7f05c80785e0 legacy=0x7f05c807aaa0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:35.628 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.627+0000 7f79757fa640 1 -- 192.168.123.105:0/3168317683 >> v1:192.168.123.105:6790/0 conn(0x7f797811a770 legacy=0x7f79781b33e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:35.628 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.628+0000 7f79757fa640 1 -- 192.168.123.105:0/3168317683 >> v1:192.168.123.109:6789/0 conn(0x7f7978074230 legacy=0x7f797810e1e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:35.628 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.628+0000 7f79757fa640 1 -- 192.168.123.105:0/3168317683 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f79781bbb80 con 0x7f797811e280 2026-03-09T20:20:35.628 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.628+0000 7f797e83c640 1 -- 192.168.123.105:0/3168317683 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f79781b99b0 con 0x7f797811e280 2026-03-09T20:20:35.628 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.628+0000 7f797e83c640 1 -- 192.168.123.105:0/3168317683 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f79781b9f10 con 0x7f797811e280 2026-03-09T20:20:35.629 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.629+0000 7f05f4d1b640 1 -- 192.168.123.105:0/3566592933 >> v1:192.168.123.105:6790/0 conn(0x7f05f011a770 legacy=0x7f05f01b33e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:35.629 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.629+0000 7f05eed76640 1 -- 192.168.123.105:0/3566592933 reap_dead start 2026-03-09T20:20:35.632 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.629+0000 7f79757fa640 1 -- 192.168.123.105:0/3168317683 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f7968002b70 con 0x7f797811e280 2026-03-09T20:20:35.632 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.630+0000 7f79757fa640 1 -- 192.168.123.105:0/3168317683 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f79680052b0 con 0x7f797811e280 2026-03-09T20:20:35.632 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.632+0000 7f79757fa640 1 -- 192.168.123.105:0/3168317683 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 4267040124 0 0) 0x7f7968005470 con 0x7f797811e280 2026-03-09T20:20:35.632 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.633+0000 7f05f4d1b640 1 -- 192.168.123.105:0/3566592933 shutdown_connections 2026-03-09T20:20:35.633 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.633+0000 7f05f4d1b640 1 -- 192.168.123.105:0/3566592933 >> 192.168.123.105:0/3566592933 conn(0x7f05f006e900 msgr2=0x7f05f0110910 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:35.633 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.633+0000 7f05f4d1b640 1 -- 192.168.123.105:0/3566592933 shutdown_connections 2026-03-09T20:20:35.634 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.633+0000 7f05f4d1b640 1 -- 192.168.123.105:0/3566592933 wait complete. 
2026-03-09T20:20:35.639 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.638+0000 7f79757fa640 1 -- 192.168.123.105:0/3168317683 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(57..57 src has 1..57) ==== 5922+0+0 (unknown 2562478528 0 0) 0x7f7968096170 con 0x7f797811e280 2026-03-09T20:20:35.645 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.643+0000 7f797e83c640 1 -- 192.168.123.105:0/3168317683 --> v1:192.168.123.105:6809/1060255430 -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7f79440053a0 con 0x7f7944001630 2026-03-09T20:20:35.645 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.644+0000 7f79757fa640 1 -- 192.168.123.105:0/3168317683 <== osd.2 v1:192.168.123.105:6809/1060255430 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (unknown 0 0 3365489958) 0x7f79440053a0 con 0x7f7944001630 2026-03-09T20:20:35.672 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.670+0000 7f797e83c640 1 -- 192.168.123.105:0/3168317683 --> v1:192.168.123.105:6809/1060255430 -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7f7944007130 con 0x7f7944001630 2026-03-09T20:20:35.672 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.671+0000 7f79757fa640 1 -- 192.168.123.105:0/3168317683 <== osd.2 v1:192.168.123.105:6809/1060255430 2 ==== command_reply(tid 2: 0 ) ==== 8+0+11 (unknown 0 0 2147188623) 0x7f7944007130 con 0x7f7944001630 2026-03-09T20:20:35.672 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.671+0000 7f797e83c640 1 -- 192.168.123.105:0/3168317683 >> v1:192.168.123.105:6809/1060255430 conn(0x7f7944001630 legacy=0x7f7944003af0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:35.672 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.671+0000 7f797e83c640 1 -- 192.168.123.105:0/3168317683 >> v1:192.168.123.105:6800/3290461294 conn(0x7f794c078660 legacy=0x7f794c07ab20 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:35.672 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.672+0000 7f797e83c640 1 -- 192.168.123.105:0/3168317683 >> v1:192.168.123.105:6789/0 conn(0x7f797811e280 legacy=0x7f79781b6c80 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:35.675 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.673+0000 7f797cdb2640 1 -- 192.168.123.105:0/3168317683 reap_dead start 2026-03-09T20:20:35.675 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.674+0000 7f797e83c640 1 -- 192.168.123.105:0/3168317683 shutdown_connections 2026-03-09T20:20:35.675 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.674+0000 7f797e83c640 1 -- 192.168.123.105:0/3168317683 >> 192.168.123.105:0/3168317683 conn(0x7f797806e900 msgr2=0x7f7978110910 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:35.675 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.674+0000 7f797e83c640 1 -- 192.168.123.105:0/3168317683 shutdown_connections 2026-03-09T20:20:35.675 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.674+0000 7f797e83c640 1 -- 192.168.123.105:0/3168317683 wait complete. 
2026-03-09T20:20:35.688 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.687+0000 7f263481a640 1 -- 192.168.123.105:0/4040913560 >> v1:192.168.123.105:6789/0 conn(0x7f262c074230 legacy=0x7f262c074610 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:35.696 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.695+0000 7f263481a640 1 -- 192.168.123.105:0/4040913560 shutdown_connections 2026-03-09T20:20:35.696 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.695+0000 7f263481a640 1 -- 192.168.123.105:0/4040913560 >> 192.168.123.105:0/4040913560 conn(0x7f262c06e900 msgr2=0x7f262c06ed10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:35.696 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.695+0000 7f263481a640 1 -- 192.168.123.105:0/4040913560 shutdown_connections 2026-03-09T20:20:35.696 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.695+0000 7f263481a640 1 -- 192.168.123.105:0/4040913560 wait complete. 2026-03-09T20:20:35.696 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.695+0000 7f263481a640 1 Processor -- start 2026-03-09T20:20:35.696 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.695+0000 7f263481a640 1 -- start start 2026-03-09T20:20:35.696 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.695+0000 7f263481a640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f262c10e500 con 0x7f262c11a770 2026-03-09T20:20:35.696 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.695+0000 7f263481a640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f262c10e6d0 con 0x7f262c074230 2026-03-09T20:20:35.696 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.695+0000 7f263481a640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f262c1c37f0 con 0x7f262c11e280 2026-03-09T20:20:35.696 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.696+0000 7f2631d8e640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f262c11a770 0x7f262c10bbf0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:50720/0 (socket says 192.168.123.105:50720) 2026-03-09T20:20:35.696 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.696+0000 7f2631d8e640 1 -- 192.168.123.105:0/3003932058 learned_addr learned my addr 192.168.123.105:0/3003932058 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:20:35.696 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.696+0000 7f261f7fe640 1 -- 192.168.123.105:0/3003932058 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1690136157 0 0) 0x7f262c10e500 con 0x7f262c11a770 2026-03-09T20:20:35.705 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.706+0000 7f261f7fe640 1 -- 192.168.123.105:0/3003932058 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f2610003620 con 0x7f262c11a770 2026-03-09T20:20:35.707 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.706+0000 7f261f7fe640 1 -- 192.168.123.105:0/3003932058 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1301124324 0 0) 0x7f262c1c37f0 con 0x7f262c11e280 2026-03-09T20:20:35.708 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.706+0000 7f261f7fe640 1 -- 192.168.123.105:0/3003932058 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 
0x7f262c10e500 con 0x7f262c11e280 2026-03-09T20:20:35.708 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.706+0000 7f261f7fe640 1 -- 192.168.123.105:0/3003932058 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2105616842 0 0) 0x7f262c10e6d0 con 0x7f262c074230 2026-03-09T20:20:35.708 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.706+0000 7f261f7fe640 1 -- 192.168.123.105:0/3003932058 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f262c1c37f0 con 0x7f262c074230 2026-03-09T20:20:35.708 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.706+0000 7f261f7fe640 1 -- 192.168.123.105:0/3003932058 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2926238014 0 0) 0x7f2610003620 con 0x7f262c11a770 2026-03-09T20:20:35.708 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.706+0000 7f261f7fe640 1 -- 192.168.123.105:0/3003932058 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f262c10e6d0 con 0x7f262c11a770 2026-03-09T20:20:35.708 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.706+0000 7f261f7fe640 1 -- 192.168.123.105:0/3003932058 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f26240030b0 con 0x7f262c11a770 2026-03-09T20:20:35.708 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.706+0000 7f261f7fe640 1 -- 192.168.123.105:0/3003932058 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 97995091 0 0) 0x7f262c10e500 con 0x7f262c11e280 2026-03-09T20:20:35.708 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.707+0000 7f261f7fe640 1 -- 192.168.123.105:0/3003932058 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f2610003620 con 0x7f262c11e280 2026-03-09T20:20:35.708 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.707+0000 7f261f7fe640 1 -- 192.168.123.105:0/3003932058 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 3253124400 0 0) 0x7f262c10e6d0 con 0x7f262c11a770 2026-03-09T20:20:35.708 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.707+0000 7f261f7fe640 1 -- 192.168.123.105:0/3003932058 >> v1:192.168.123.105:6790/0 conn(0x7f262c11e280 legacy=0x7f262c119d70 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:35.708 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.707+0000 7f261f7fe640 1 -- 192.168.123.105:0/3003932058 >> v1:192.168.123.109:6789/0 conn(0x7f262c074230 legacy=0x7f262c10b4e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:35.708 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.707+0000 7f261f7fe640 1 -- 192.168.123.105:0/3003932058 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f262c1c49d0 con 0x7f262c11a770 2026-03-09T20:20:35.712 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.707+0000 7f263481a640 1 -- 192.168.123.105:0/3003932058 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f262c1c39c0 con 0x7f262c11a770 2026-03-09T20:20:35.712 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.708+0000 7f263481a640 1 -- 192.168.123.105:0/3003932058 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f262c1c3f50 con 0x7f262c11a770 2026-03-09T20:20:35.712 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.709+0000 7f261f7fe640 1 -- 192.168.123.105:0/3003932058 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f2624003b60 con 0x7f262c11a770 2026-03-09T20:20:35.712 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.710+0000 7f261f7fe640 1 -- 192.168.123.105:0/3003932058 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f2624005bc0 con 0x7f262c11a770 2026-03-09T20:20:35.715 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.714+0000 7f0b77fff640 1 -- 192.168.123.105:0/1127058808 --> v1:192.168.123.109:6808/3079043049 -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7f0b70007130 con 0x7f0b70001630 2026-03-09T20:20:35.716 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.715+0000 7f0b967fc640 1 -- 192.168.123.105:0/1127058808 <== osd.6 v1:192.168.123.109:6808/3079043049 2 ==== command_reply(tid 2: 0 ) ==== 8+0+12 (unknown 0 0 1566799740) 0x7f0b70007130 con 0x7f0b70001630 2026-03-09T20:20:35.716 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.715+0000 7f0b77fff640 1 -- 192.168.123.105:0/1127058808 >> v1:192.168.123.109:6808/3079043049 conn(0x7f0b70001630 legacy=0x7f0b70003af0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:35.716 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.715+0000 7f0b77fff640 1 -- 192.168.123.105:0/1127058808 >> v1:192.168.123.105:6800/3290461294 conn(0x7f0b80078400 legacy=0x7f0b8007a8c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:35.716 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.716+0000 7f0b77fff640 1 -- 192.168.123.105:0/1127058808 >> v1:192.168.123.105:6789/0 conn(0x7f0ba811e280 legacy=0x7f0ba81b6bf0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:35.716 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.716+0000 7f0badbd5640 1 -- 192.168.123.105:0/1127058808 reap_dead start 2026-03-09T20:20:35.717 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.717+0000 7f0b77fff640 1 -- 192.168.123.105:0/1127058808 shutdown_connections 2026-03-09T20:20:35.717 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.717+0000 7f0b77fff640 1 -- 192.168.123.105:0/1127058808 >> 192.168.123.105:0/1127058808 conn(0x7f0ba806e900 msgr2=0x7f0ba8072bd0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:35.718 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.718+0000 7f0b77fff640 1 -- 192.168.123.105:0/1127058808 shutdown_connections 2026-03-09T20:20:35.718 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.718+0000 7f0b77fff640 1 -- 192.168.123.105:0/1127058808 wait complete. 
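The command(tid 2: {"prefix": "flush_pg_stats"}) / command_reply exchanges above go straight to each OSD's messenger address; the reply carries that OSD's current stat sequence, which the later "need seq ... got ..." check compares against. The same flush can be requested from a CLI client with "ceph tell osd.<id> flush_pg_stats"; a minimal Python sketch of that step (general Ceph usage rather than this job's teuthology code, and it assumes a local ceph client with admin credentials):

    import subprocess

    # Sketch: ask one OSD to flush its PG stats and return the sequence it
    # reports, i.e. the "need" side of the later comparison.
    def flush_pg_stats(osd_id):
        out = subprocess.run(
            ["ceph", "tell", f"osd.{osd_id}", "flush_pg_stats"],
            capture_output=True, text=True, check=True,
        )
        return int(out.stdout.strip())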
2026-03-09T20:20:35.718 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.718+0000 7f9311cc6640 1 -- 192.168.123.105:0/2120179032 >> v1:192.168.123.105:6790/0 conn(0x7f930c074230 legacy=0x7f930c074610 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:35.723 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.720+0000 7f9311cc6640 1 -- 192.168.123.105:0/2120179032 shutdown_connections 2026-03-09T20:20:35.723 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.720+0000 7f9311cc6640 1 -- 192.168.123.105:0/2120179032 >> 192.168.123.105:0/2120179032 conn(0x7f930c06e900 msgr2=0x7f930c06ed10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:35.723 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.721+0000 7f9311cc6640 1 -- 192.168.123.105:0/2120179032 shutdown_connections 2026-03-09T20:20:35.723 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.721+0000 7f9311cc6640 1 -- 192.168.123.105:0/2120179032 wait complete. 2026-03-09T20:20:35.723 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.721+0000 7f9311cc6640 1 Processor -- start 2026-03-09T20:20:35.723 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.721+0000 7f9311cc6640 1 -- start start 2026-03-09T20:20:35.723 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.721+0000 7f9311cc6640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f930c1c0fc0 con 0x7f930c11e280 2026-03-09T20:20:35.723 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.721+0000 7f9311cc6640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f930c1c21c0 con 0x7f930c11a770 2026-03-09T20:20:35.723 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.721+0000 7f9311cc6640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f930c1c33c0 con 0x7f930c074230 2026-03-09T20:20:35.723 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.722+0000 7f930b7fe640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7f930c074230 0x7f930c10e230 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.105:47586/0 (socket says 192.168.123.105:47586) 2026-03-09T20:20:35.723 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.722+0000 7f930b7fe640 1 -- 192.168.123.105:0/539950048 learned_addr learned my addr 192.168.123.105:0/539950048 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:20:35.723 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.723+0000 7f93097fa640 1 -- 192.168.123.105:0/539950048 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2737531411 0 0) 0x7f930c1c0fc0 con 0x7f930c11e280 2026-03-09T20:20:35.723 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.720+0000 7f261f7fe640 1 -- 192.168.123.105:0/3003932058 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 4267040124 0 0) 0x7f2624006e50 con 0x7f262c11a770 2026-03-09T20:20:35.728 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.723+0000 7f261f7fe640 1 -- 192.168.123.105:0/3003932058 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(57..57 src has 1..57) ==== 5922+0+0 (unknown 2562478528 0 0) 0x7f26240950f0 con 0x7f262c11a770 2026-03-09T20:20:35.728 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.724+0000 7f93097fa640 1 -- 192.168.123.105:0/539950048 --> v1:192.168.123.105:6789/0 -- auth(proto 
2 36 bytes epoch 0) -- 0x7f92e4003620 con 0x7f930c11e280 2026-03-09T20:20:35.728 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.724+0000 7f93097fa640 1 -- 192.168.123.105:0/539950048 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1219728772 0 0) 0x7f930c1c33c0 con 0x7f930c074230 2026-03-09T20:20:35.728 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.725+0000 7f93097fa640 1 -- 192.168.123.105:0/539950048 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f930c1c0fc0 con 0x7f930c074230 2026-03-09T20:20:35.728 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.725+0000 7f93097fa640 1 -- 192.168.123.105:0/539950048 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2603749536 0 0) 0x7f930c1c21c0 con 0x7f930c11a770 2026-03-09T20:20:35.728 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.725+0000 7f93097fa640 1 -- 192.168.123.105:0/539950048 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f930c1c33c0 con 0x7f930c11a770 2026-03-09T20:20:35.735 INFO:teuthology.orchestra.run.vm05.stdout:34359738389 2026-03-09T20:20:35.735 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph osd last-stat-seq osd.0 2026-03-09T20:20:35.738 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.731+0000 7f261d7fa640 1 -- 192.168.123.105:0/3003932058 --> v1:192.168.123.109:6804/3558334635 -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7f25f40053a0 con 0x7f25f4001630 2026-03-09T20:20:35.738 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.732+0000 7f261f7fe640 1 -- 192.168.123.105:0/3003932058 <== osd.5 v1:192.168.123.109:6804/3558334635 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (unknown 0 0 3365489958) 0x7f25f40053a0 con 0x7f25f4001630 2026-03-09T20:20:35.738 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.730+0000 7f93097fa640 1 -- 192.168.123.105:0/539950048 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3619576290 0 0) 0x7f92e4003620 con 0x7f930c11e280 2026-03-09T20:20:35.738 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.730+0000 7f93097fa640 1 -- 192.168.123.105:0/539950048 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f930c1c21c0 con 0x7f930c11e280 2026-03-09T20:20:35.738 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.730+0000 7f93097fa640 1 -- 192.168.123.105:0/539950048 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 315005891 0 0) 0x7f930c1c0fc0 con 0x7f930c074230 2026-03-09T20:20:35.738 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.730+0000 7f93097fa640 1 -- 192.168.123.105:0/539950048 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f92e4003620 con 0x7f930c074230 2026-03-09T20:20:35.738 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.730+0000 7f93097fa640 1 -- 192.168.123.105:0/539950048 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3055783009 0 0) 0x7f930c1c33c0 con 0x7f930c11a770 2026-03-09T20:20:35.738 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.730+0000 7f93097fa640 1 -- 192.168.123.105:0/539950048 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes 
epoch 0) -- 0x7f930c1c0fc0 con 0x7f930c11a770 2026-03-09T20:20:35.738 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.730+0000 7f93097fa640 1 -- 192.168.123.105:0/539950048 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f9304003340 con 0x7f930c11e280 2026-03-09T20:20:35.738 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.730+0000 7f93097fa640 1 -- 192.168.123.105:0/539950048 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f92f4003240 con 0x7f930c074230 2026-03-09T20:20:35.738 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.730+0000 7f93097fa640 1 -- 192.168.123.105:0/539950048 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f92f80030c0 con 0x7f930c11a770 2026-03-09T20:20:35.738 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.730+0000 7f93097fa640 1 -- 192.168.123.105:0/539950048 <== mon.2 v1:192.168.123.105:6790/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 3936640501 0 0) 0x7f92e4003620 con 0x7f930c074230 2026-03-09T20:20:35.738 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.730+0000 7f93097fa640 1 -- 192.168.123.105:0/539950048 >> v1:192.168.123.109:6789/0 conn(0x7f930c11a770 legacy=0x7f930c118d60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:35.738 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.730+0000 7f93097fa640 1 -- 192.168.123.105:0/539950048 >> v1:192.168.123.105:6789/0 conn(0x7f930c11e280 legacy=0x7f930c1bf7b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:35.738 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.730+0000 7f93097fa640 1 -- 192.168.123.105:0/539950048 --> v1:192.168.123.105:6790/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f930c1c45c0 con 0x7f930c074230 2026-03-09T20:20:35.738 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.731+0000 7f9311cc6640 1 -- 192.168.123.105:0/539950048 --> v1:192.168.123.105:6790/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f930c1c35f0 con 0x7f930c074230 2026-03-09T20:20:35.738 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.731+0000 7f9311cc6640 1 -- 192.168.123.105:0/539950048 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=0}) -- 0x7f930c1c3b80 con 0x7f930c074230 2026-03-09T20:20:35.738 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.731+0000 7f93097fa640 1 -- 192.168.123.105:0/539950048 <== mon.2 v1:192.168.123.105:6790/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f92f40047e0 con 0x7f930c074230 2026-03-09T20:20:35.738 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.733+0000 7f93097fa640 1 -- 192.168.123.105:0/539950048 <== mon.2 v1:192.168.123.105:6790/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f92f4004c80 con 0x7f930c074230 2026-03-09T20:20:35.738 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.733+0000 7f93097fa640 1 -- 192.168.123.105:0/539950048 <== mon.2 v1:192.168.123.105:6790/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 4267040124 0 0) 0x7f92f4004f00 con 0x7f930c074230 2026-03-09T20:20:35.756 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.756+0000 7f93097fa640 1 -- 192.168.123.105:0/539950048 <== mon.2 v1:192.168.123.105:6790/0 8 ==== osd_map(57..57 src has 1..57) ==== 5922+0+0 (unknown 2562478528 0 0) 0x7f92f4095c00 con 0x7f930c074230 2026-03-09T20:20:35.757 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.757+0000 7f9311cc6640 1 -- 192.168.123.105:0/539950048 --> v1:192.168.123.109:6812/4141797613 -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7f92d40053a0 con 0x7f92d4001630 2026-03-09T20:20:35.761 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.760+0000 7f93097fa640 1 -- 192.168.123.105:0/539950048 <== osd.7 v1:192.168.123.109:6812/4141797613 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (unknown 0 0 3365489958) 0x7f92d40053a0 con 0x7f92d4001630 2026-03-09T20:20:35.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:35 vm09 ceph-mon[54524]: Checking pool "datapool" exists for service iscsi.datapool 2026-03-09T20:20:35.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:35 vm09 ceph-mon[54524]: Deploying daemon prometheus.a on vm09 2026-03-09T20:20:35.775 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.773+0000 7f261d7fa640 1 -- 192.168.123.105:0/3003932058 --> v1:192.168.123.109:6804/3558334635 -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7f25f4007130 con 0x7f25f4001630 2026-03-09T20:20:35.776 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.775+0000 7f261f7fe640 1 -- 192.168.123.105:0/3003932058 <== osd.5 v1:192.168.123.109:6804/3558334635 2 ==== command_reply(tid 2: 0 ) ==== 8+0+12 (unknown 0 0 572138344) 0x7f25f4007130 con 0x7f25f4001630 2026-03-09T20:20:35.791 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.784+0000 7f9311cc6640 1 -- 192.168.123.105:0/539950048 --> v1:192.168.123.109:6812/4141797613 -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7f92d4007130 con 0x7f92d4001630 2026-03-09T20:20:35.791 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.789+0000 7f93097fa640 1 -- 192.168.123.105:0/539950048 <== osd.7 v1:192.168.123.109:6812/4141797613 2 ==== command_reply(tid 2: 0 ) ==== 8+0+12 (unknown 0 0 1577574006) 0x7f92d4007130 con 0x7f92d4001630 2026-03-09T20:20:35.791 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.791+0000 7f263481a640 1 -- 192.168.123.105:0/3003932058 >> v1:192.168.123.109:6804/3558334635 conn(0x7f25f4001630 legacy=0x7f25f4003af0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:35.791 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.791+0000 7f263481a640 1 -- 192.168.123.105:0/3003932058 >> v1:192.168.123.105:6800/3290461294 conn(0x7f26100784e0 legacy=0x7f261007a9a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:35.791 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.791+0000 7f263481a640 1 -- 192.168.123.105:0/3003932058 >> v1:192.168.123.105:6789/0 conn(0x7f262c11a770 legacy=0x7f262c10bbf0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:35.791 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.791+0000 7f2632d90640 1 -- 192.168.123.105:0/3003932058 reap_dead start 2026-03-09T20:20:35.791 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.792+0000 7f263481a640 1 -- 192.168.123.105:0/3003932058 shutdown_connections 2026-03-09T20:20:35.791 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.792+0000 7f263481a640 1 -- 192.168.123.105:0/3003932058 >> 192.168.123.105:0/3003932058 conn(0x7f262c06e900 msgr2=0x7f262c072bd0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:35.791 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.792+0000 7f263481a640 1 -- 192.168.123.105:0/3003932058 shutdown_connections 2026-03-09T20:20:35.791 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.792+0000 7f263481a640 1 -- 192.168.123.105:0/3003932058 wait complete. 2026-03-09T20:20:35.800 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.801+0000 7f93027fc640 1 -- 192.168.123.105:0/539950048 >> v1:192.168.123.109:6812/4141797613 conn(0x7f92d4001630 legacy=0x7f92d4003af0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:35.800 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.801+0000 7f93027fc640 1 -- 192.168.123.105:0/539950048 >> v1:192.168.123.105:6800/3290461294 conn(0x7f92e4078760 legacy=0x7f92e407ac20 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:35.806 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.802+0000 7f93027fc640 1 -- 192.168.123.105:0/539950048 >> v1:192.168.123.105:6790/0 conn(0x7f930c074230 legacy=0x7f930c10e230 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:35.813 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.811+0000 7f930bfff640 1 -- 192.168.123.105:0/539950048 reap_dead start 2026-03-09T20:20:35.821 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.821+0000 7f93027fc640 1 -- 192.168.123.105:0/539950048 shutdown_connections 2026-03-09T20:20:35.821 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.821+0000 7f93027fc640 1 -- 192.168.123.105:0/539950048 >> 192.168.123.105:0/539950048 conn(0x7f930c06e900 msgr2=0x7f930c072bd0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:35.821 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.822+0000 7f93027fc640 1 -- 192.168.123.105:0/539950048 shutdown_connections 2026-03-09T20:20:35.821 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:35.822+0000 7f93027fc640 1 -- 192.168.123.105:0/539950048 wait complete. 
2026-03-09T20:20:35.902 INFO:teuthology.orchestra.run.vm05.stdout:128849018892 2026-03-09T20:20:35.903 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph osd last-stat-seq osd.4 2026-03-09T20:20:35.907 INFO:teuthology.orchestra.run.vm05.stdout:107374182414 2026-03-09T20:20:35.907 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph osd last-stat-seq osd.3 2026-03-09T20:20:35.944 INFO:teuthology.orchestra.run.vm05.stdout:73014444048 2026-03-09T20:20:35.944 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph osd last-stat-seq osd.2 2026-03-09T20:20:36.020 INFO:teuthology.orchestra.run.vm05.stdout:176093659143 2026-03-09T20:20:36.020 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph osd last-stat-seq osd.6 2026-03-09T20:20:36.038 INFO:teuthology.orchestra.run.vm05.stdout:154618822665 2026-03-09T20:20:36.038 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph osd last-stat-seq osd.5 2026-03-09T20:20:36.061 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:20:36.085 INFO:teuthology.orchestra.run.vm05.stdout:197568495620 2026-03-09T20:20:36.086 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph osd last-stat-seq osd.7 2026-03-09T20:20:36.309 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.309+0000 7fb89d08f640 1 -- 192.168.123.105:0/3849125198 >> v1:192.168.123.105:6789/0 conn(0x7fb898077340 legacy=0x7fb8980797e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:36.309 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.309+0000 7fb89d08f640 1 -- 192.168.123.105:0/3849125198 shutdown_connections 2026-03-09T20:20:36.309 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.309+0000 7fb89d08f640 1 -- 192.168.123.105:0/3849125198 >> 192.168.123.105:0/3849125198 conn(0x7fb89806d560 msgr2=0x7fb89806d970 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:36.312 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.309+0000 7fb89d08f640 1 -- 192.168.123.105:0/3849125198 shutdown_connections 2026-03-09T20:20:36.312 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.310+0000 7fb89d08f640 1 -- 192.168.123.105:0/3849125198 wait complete. 
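The DEBUG lines above show the check reading "ceph osd last-stat-seq osd.N" for every OSD through a cephadm shell on vm05, with each sequence printed on stdout (values such as 128849018892 and 107374182414 above). A standalone sketch of that query, using the image and fsid shown in those commands; the helper name and the use of subprocess are illustrative, not the teuthology implementation:

    import subprocess

    IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"
    FSID = "c0151936-1bf4-11f1-b896-23f7bea8a6ea"

    # Hypothetical helper: run `ceph osd last-stat-seq osd.<id>` inside a
    # cephadm shell and return the integer it prints, as in the log above.
    def last_stat_seq(osd_id):
        cmd = [
            "sudo", "cephadm", "--image", IMAGE,
            "shell", "--fsid", FSID, "--",
            "ceph", "osd", "last-stat-seq", f"osd.{osd_id}",
        ]
        out = subprocess.run(cmd, capture_output=True, text=True, check=True)
        return int(out.stdout.strip())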
2026-03-09T20:20:36.312 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.310+0000 7fb89d08f640 1 Processor -- start 2026-03-09T20:20:36.312 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.310+0000 7fb89d08f640 1 -- start start 2026-03-09T20:20:36.312 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.310+0000 7fb89d08f640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fb898086130 con 0x7fb89807af00 2026-03-09T20:20:36.312 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.310+0000 7fb89d08f640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fb898086300 con 0x7fb898074040 2026-03-09T20:20:36.312 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.310+0000 7fb89d08f640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fb8981c36b0 con 0x7fb898085c10 2026-03-09T20:20:36.312 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.313+0000 7fb897fff640 1 --1- >> v1:192.168.123.109:6789/0 conn(0x7fb898074040 0x7fb89810df70 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.109:6789/0 says I am v1:192.168.123.105:46182/0 (socket says 192.168.123.105:46182) 2026-03-09T20:20:36.313 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.313+0000 7fb897fff640 1 -- 192.168.123.105:0/2441678391 learned_addr learned my addr 192.168.123.105:0/2441678391 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:20:36.313 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.310+0000 7fb8977fe640 1 --1- 192.168.123.105:0/2441678391 >> v1:192.168.123.105:6789/0 conn(0x7fb89807af00 0x7fb898085500 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:50744/0 (socket says 192.168.123.105:50744) 2026-03-09T20:20:36.313 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.313+0000 7fb8957fa640 1 -- 192.168.123.105:0/2441678391 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3826346408 0 0) 0x7fb898086130 con 0x7fb89807af00 2026-03-09T20:20:36.313 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.313+0000 7fb8957fa640 1 -- 192.168.123.105:0/2441678391 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fb87c003600 con 0x7fb89807af00 2026-03-09T20:20:36.313 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.313+0000 7fb8957fa640 1 -- 192.168.123.105:0/2441678391 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3874132237 0 0) 0x7fb8981c36b0 con 0x7fb898085c10 2026-03-09T20:20:36.313 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.314+0000 7fb8957fa640 1 -- 192.168.123.105:0/2441678391 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fb898086130 con 0x7fb898085c10 2026-03-09T20:20:36.313 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.314+0000 7fb8957fa640 1 -- 192.168.123.105:0/2441678391 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2681084380 0 0) 0x7fb898086300 con 0x7fb898074040 2026-03-09T20:20:36.313 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.314+0000 7fb8957fa640 1 -- 192.168.123.105:0/2441678391 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fb8981c36b0 con 0x7fb898074040 2026-03-09T20:20:36.313 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.314+0000 7fb8957fa640 1 -- 192.168.123.105:0/2441678391 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1551654051 0 0) 0x7fb87c003600 con 0x7fb89807af00 2026-03-09T20:20:36.313 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.314+0000 7fb8957fa640 1 -- 192.168.123.105:0/2441678391 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fb898086300 con 0x7fb89807af00 2026-03-09T20:20:36.313 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.314+0000 7fb8957fa640 1 -- 192.168.123.105:0/2441678391 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2880208335 0 0) 0x7fb898086130 con 0x7fb898085c10 2026-03-09T20:20:36.313 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.314+0000 7fb8957fa640 1 -- 192.168.123.105:0/2441678391 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fb87c003600 con 0x7fb898085c10 2026-03-09T20:20:36.314 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.314+0000 7fb8957fa640 1 -- 192.168.123.105:0/2441678391 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7fb890003500 con 0x7fb89807af00 2026-03-09T20:20:36.314 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.315+0000 7fb8957fa640 1 -- 192.168.123.105:0/2441678391 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7fb88c002f70 con 0x7fb898085c10 2026-03-09T20:20:36.314 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.315+0000 7fb8957fa640 1 -- 192.168.123.105:0/2441678391 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3963013933 0 0) 0x7fb8981c36b0 con 0x7fb898074040 2026-03-09T20:20:36.314 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.315+0000 7fb8957fa640 1 -- 192.168.123.105:0/2441678391 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fb898086130 con 0x7fb898074040 2026-03-09T20:20:36.314 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.315+0000 7fb8957fa640 1 -- 192.168.123.105:0/2441678391 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 274541548 0 0) 0x7fb898086300 con 0x7fb89807af00 2026-03-09T20:20:36.314 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.315+0000 7fb8957fa640 1 -- 192.168.123.105:0/2441678391 >> v1:192.168.123.105:6790/0 conn(0x7fb898085c10 legacy=0x7fb8981bff70 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:36.314 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.315+0000 7fb8957fa640 1 -- 192.168.123.105:0/2441678391 >> v1:192.168.123.109:6789/0 conn(0x7fb898074040 legacy=0x7fb89810df70 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:36.314 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.315+0000 7fb8957fa640 1 -- 192.168.123.105:0/2441678391 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fb8981c4890 con 0x7fb89807af00 2026-03-09T20:20:36.315 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.315+0000 7fb89d08f640 1 -- 192.168.123.105:0/2441678391 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7fb8981c38e0 con 0x7fb89807af00 2026-03-09T20:20:36.315 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.315+0000 7fb89d08f640 
1 -- 192.168.123.105:0/2441678391 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7fb8981c3df0 con 0x7fb89807af00 2026-03-09T20:20:36.316 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.316+0000 7fb89d08f640 1 -- 192.168.123.105:0/2441678391 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fb864005180 con 0x7fb89807af00 2026-03-09T20:20:36.316 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.316+0000 7fb8957fa640 1 -- 192.168.123.105:0/2441678391 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7fb890004890 con 0x7fb89807af00 2026-03-09T20:20:36.316 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.316+0000 7fb8957fa640 1 -- 192.168.123.105:0/2441678391 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7fb890004fe0 con 0x7fb89807af00 2026-03-09T20:20:36.320 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.320+0000 7fb8957fa640 1 -- 192.168.123.105:0/2441678391 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 4267040124 0 0) 0x7fb8900051a0 con 0x7fb89807af00 2026-03-09T20:20:36.320 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.321+0000 7fb8957fa640 1 -- 192.168.123.105:0/2441678391 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(58..58 src has 1..58) ==== 5922+0+0 (unknown 1300484666 0 0) 0x7fb890095f70 con 0x7fb89807af00 2026-03-09T20:20:36.320 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.321+0000 7fb8957fa640 1 -- 192.168.123.105:0/2441678391 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7fb890096410 con 0x7fb89807af00 2026-03-09T20:20:36.435 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.434+0000 7fb89d08f640 1 -- 192.168.123.105:0/2441678391 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "osd last-stat-seq", "id": 1} v 0) -- 0x7fb864005470 con 0x7fb89807af00 2026-03-09T20:20:36.435 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.436+0000 7fb8957fa640 1 -- 192.168.123.105:0/2441678391 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 1}]=0 v0) ==== 74+0+12 (unknown 832126871 0 21906937) 0x7fb890004fc0 con 0x7fb89807af00 2026-03-09T20:20:36.435 INFO:teuthology.orchestra.run.vm05.stdout:51539607569 2026-03-09T20:20:36.439 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.439+0000 7fb89d08f640 1 -- 192.168.123.105:0/2441678391 >> v1:192.168.123.105:6800/3290461294 conn(0x7fb87c078890 legacy=0x7fb87c07ad50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:36.439 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.439+0000 7fb89d08f640 1 -- 192.168.123.105:0/2441678391 >> v1:192.168.123.105:6789/0 conn(0x7fb89807af00 legacy=0x7fb898085500 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:36.439 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.439+0000 7fb89d08f640 1 -- 192.168.123.105:0/2441678391 shutdown_connections 2026-03-09T20:20:36.439 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.439+0000 7fb89d08f640 1 -- 192.168.123.105:0/2441678391 >> 192.168.123.105:0/2441678391 conn(0x7fb89806d560 msgr2=0x7fb89807d3c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:36.439 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.439+0000 7fb89d08f640 1 -- 192.168.123.105:0/2441678391 shutdown_connections 2026-03-09T20:20:36.439 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.439+0000 7fb89d08f640 1 -- 192.168.123.105:0/2441678391 wait complete. 2026-03-09T20:20:36.470 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:20:36.601 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:36 vm05 ceph-mon[51870]: pgmap v116: 132 pgs: 124 active+clean, 8 creating+peering; 454 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 74 KiB/s rd, 5.8 KiB/s wr, 179 op/s 2026-03-09T20:20:36.601 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:36 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:36.602 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:36 vm05 ceph-mon[51870]: osdmap e58: 8 total, 8 up, 8 in 2026-03-09T20:20:36.602 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:36 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2441678391' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T20:20:36.602 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:36 vm05 ceph-mon[61345]: pgmap v116: 132 pgs: 124 active+clean, 8 creating+peering; 454 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 74 KiB/s rd, 5.8 KiB/s wr, 179 op/s 2026-03-09T20:20:36.602 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:36 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:36.602 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:36 vm05 ceph-mon[61345]: osdmap e58: 8 total, 8 up, 8 in 2026-03-09T20:20:36.602 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:36 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2441678391' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T20:20:36.726 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.723+0000 7f128a4f7640 1 -- 192.168.123.105:0/3419500637 >> v1:192.168.123.105:6789/0 conn(0x7f12840772b0 legacy=0x7f1284079750 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:36.726 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.723+0000 7f128a4f7640 1 -- 192.168.123.105:0/3419500637 shutdown_connections 2026-03-09T20:20:36.726 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.723+0000 7f128a4f7640 1 -- 192.168.123.105:0/3419500637 >> 192.168.123.105:0/3419500637 conn(0x7f128406e900 msgr2=0x7f128406ed10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:36.730 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.723+0000 7f128a4f7640 1 -- 192.168.123.105:0/3419500637 shutdown_connections 2026-03-09T20:20:36.730 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.723+0000 7f128a4f7640 1 -- 192.168.123.105:0/3419500637 wait complete. 
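Inside that shell the ceph CLI turns "osd last-stat-seq" into a monitor command: the lines above show mon_command({"prefix": "osd last-stat-seq", "id": 1}) going to mon.0, the matching mon_command_ack, the sequence 51539607569 on stdout, and the monitor journals logging the dispatch from client.admin. The same request can be sent through the librados Python binding; a sketch assuming a reachable cluster and default conf/keyring paths:

    import json
    import rados

    # Sketch only: issue the same monitor command the CLI sends above.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({"prefix": "osd last-stat-seq", "id": 1}), b""
    )
    print(ret, outbuf.decode().strip())  # the sequence, e.g. 51539607569 above
    cluster.shutdown()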
2026-03-09T20:20:36.730 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.724+0000 7f128a4f7640 1 Processor -- start 2026-03-09T20:20:36.730 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.724+0000 7f128a4f7640 1 -- start start 2026-03-09T20:20:36.730 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.724+0000 7f128a4f7640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f128413a430 con 0x7f128410bad0 2026-03-09T20:20:36.731 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.724+0000 7f128a4f7640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f128413b610 con 0x7f128407ae70 2026-03-09T20:20:36.731 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.724+0000 7f128a4f7640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f128413c7f0 con 0x7f1284074230 2026-03-09T20:20:36.731 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.724+0000 7f1283fff640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7f1284074230 0x7f128410b1e0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.105:47624/0 (socket says 192.168.123.105:47624) 2026-03-09T20:20:36.731 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.724+0000 7f1283fff640 1 -- 192.168.123.105:0/118746643 learned_addr learned my addr 192.168.123.105:0/118746643 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:20:36.731 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.725+0000 7f12817fa640 1 -- 192.168.123.105:0/118746643 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3005806875 0 0) 0x7f128413c7f0 con 0x7f1284074230 2026-03-09T20:20:36.731 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.725+0000 7f12817fa640 1 -- 192.168.123.105:0/118746643 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f1254003620 con 0x7f1284074230 2026-03-09T20:20:36.731 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.725+0000 7f12817fa640 1 -- 192.168.123.105:0/118746643 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 390895523 0 0) 0x7f128413b610 con 0x7f128407ae70 2026-03-09T20:20:36.731 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.725+0000 7f12817fa640 1 -- 192.168.123.105:0/118746643 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f128413c7f0 con 0x7f128407ae70 2026-03-09T20:20:36.731 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.725+0000 7f12817fa640 1 -- 192.168.123.105:0/118746643 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 4021502090 0 0) 0x7f128413a430 con 0x7f128410bad0 2026-03-09T20:20:36.731 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.725+0000 7f12817fa640 1 -- 192.168.123.105:0/118746643 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f128413b610 con 0x7f128410bad0 2026-03-09T20:20:36.731 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.725+0000 7f12817fa640 1 -- 192.168.123.105:0/118746643 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2053648925 0 0) 0x7f128413c7f0 con 0x7f128407ae70 2026-03-09T20:20:36.731 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.725+0000 7f12817fa640 1 -- 192.168.123.105:0/118746643 --> 
v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f128413a430 con 0x7f128407ae70 2026-03-09T20:20:36.731 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.725+0000 7f12817fa640 1 -- 192.168.123.105:0/118746643 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f127c004170 con 0x7f128407ae70 2026-03-09T20:20:36.731 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.725+0000 7f12817fa640 1 -- 192.168.123.105:0/118746643 <== mon.1 v1:192.168.123.109:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2385029437 0 0) 0x7f128413a430 con 0x7f128407ae70 2026-03-09T20:20:36.731 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.725+0000 7f12817fa640 1 -- 192.168.123.105:0/118746643 >> v1:192.168.123.105:6790/0 conn(0x7f1284074230 legacy=0x7f128410b1e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:36.731 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.725+0000 7f12817fa640 1 -- 192.168.123.105:0/118746643 >> v1:192.168.123.105:6789/0 conn(0x7f128410bad0 legacy=0x7f128410e0b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:36.731 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.725+0000 7f12817fa640 1 -- 192.168.123.105:0/118746643 --> v1:192.168.123.109:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f128413d9f0 con 0x7f128407ae70 2026-03-09T20:20:36.731 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.725+0000 7f128a4f7640 1 -- 192.168.123.105:0/118746643 --> v1:192.168.123.109:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f128413b840 con 0x7f128407ae70 2026-03-09T20:20:36.731 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.725+0000 7f128a4f7640 1 -- 192.168.123.105:0/118746643 --> v1:192.168.123.109:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f128413be70 con 0x7f128407ae70 2026-03-09T20:20:36.731 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.727+0000 7f12817fa640 1 -- 192.168.123.105:0/118746643 <== mon.1 v1:192.168.123.109:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f127c004830 con 0x7f128407ae70 2026-03-09T20:20:36.731 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.727+0000 7f12817fa640 1 -- 192.168.123.105:0/118746643 <== mon.1 v1:192.168.123.109:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f127c004e40 con 0x7f128407ae70 2026-03-09T20:20:36.731 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.727+0000 7f12817fa640 1 -- 192.168.123.105:0/118746643 <== mon.1 v1:192.168.123.109:6789/0 7 ==== mgrmap(e 16) ==== 100051+0+0 (unknown 646619093 0 0) 0x7f127c006110 con 0x7f128407ae70 2026-03-09T20:20:36.731 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.728+0000 7f12817fa640 1 -- 192.168.123.105:0/118746643 <== mon.1 v1:192.168.123.109:6789/0 8 ==== osd_map(58..58 src has 1..58) ==== 5922+0+0 (unknown 1300484666 0 0) 0x7f127c095160 con 0x7f128407ae70 2026-03-09T20:20:36.731 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.728+0000 7f128a4f7640 1 -- 192.168.123.105:0/118746643 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f128410a470 con 0x7f128407ae70 2026-03-09T20:20:36.741 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.733+0000 7f12817fa640 1 -- 192.168.123.105:0/118746643 <== mon.1 v1:192.168.123.109:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 
72+0+195034 (unknown 1092875540 0 2568732696) 0x7f127c05e0d0 con 0x7f128407ae70 2026-03-09T20:20:36.742 INFO:tasks.cephadm.ceph_manager.ceph:need seq 51539607571 got 51539607569 for osd.1 2026-03-09T20:20:36.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:36 vm09 ceph-mon[54524]: pgmap v116: 132 pgs: 124 active+clean, 8 creating+peering; 454 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 74 KiB/s rd, 5.8 KiB/s wr, 179 op/s 2026-03-09T20:20:36.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:36 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:36.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:36 vm09 ceph-mon[54524]: osdmap e58: 8 total, 8 up, 8 in 2026-03-09T20:20:36.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:36 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2441678391' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T20:20:36.853 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.853+0000 7f128a4f7640 1 -- 192.168.123.105:0/118746643 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "osd last-stat-seq", "id": 0} v 0) -- 0x7f128410c3f0 con 0x7f128407ae70 2026-03-09T20:20:36.856 INFO:teuthology.orchestra.run.vm05.stdout:34359738389 2026-03-09T20:20:36.856 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.853+0000 7f12817fa640 1 -- 192.168.123.105:0/118746643 <== mon.1 v1:192.168.123.109:6789/0 10 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 0}]=0 v0) ==== 74+0+12 (unknown 574334944 0 4168920183) 0x7f127c061d80 con 0x7f128407ae70 2026-03-09T20:20:36.859 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.857+0000 7f128a4f7640 1 -- 192.168.123.105:0/118746643 >> v1:192.168.123.105:6800/3290461294 conn(0x7f1254078600 legacy=0x7f125407aac0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:36.859 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.860+0000 7f128a4f7640 1 -- 192.168.123.105:0/118746643 >> v1:192.168.123.109:6789/0 conn(0x7f128407ae70 legacy=0x7f128410d940 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:36.859 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.860+0000 7f128a4f7640 1 -- 192.168.123.105:0/118746643 shutdown_connections 2026-03-09T20:20:36.859 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.860+0000 7f128a4f7640 1 -- 192.168.123.105:0/118746643 >> 192.168.123.105:0/118746643 conn(0x7f128406e900 msgr2=0x7f128407e330 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:36.860 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.860+0000 7f128a4f7640 1 -- 192.168.123.105:0/118746643 shutdown_connections 2026-03-09T20:20:36.860 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:36.860+0000 7f128a4f7640 1 -- 192.168.123.105:0/118746643 wait complete. 
2026-03-09T20:20:37.035 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:20:37.059 INFO:tasks.cephadm.ceph_manager.ceph:need seq 34359738389 got 34359738389 for osd.0 2026-03-09T20:20:37.059 DEBUG:teuthology.parallel:result is None 2026-03-09T20:20:37.087 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:20:37.126 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:20:37.146 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:20:37.154 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:20:37.293 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:20:37.448 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.448+0000 7fec7a4ae640 1 -- 192.168.123.105:0/2658613520 >> v1:192.168.123.105:6789/0 conn(0x7fec74074230 legacy=0x7fec74074610 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:37.448 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.448+0000 7fec7a4ae640 1 -- 192.168.123.105:0/2658613520 shutdown_connections 2026-03-09T20:20:37.448 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.448+0000 7fec7a4ae640 1 -- 192.168.123.105:0/2658613520 >> 192.168.123.105:0/2658613520 conn(0x7fec7406e900 msgr2=0x7fec7406ed10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:37.449 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.448+0000 7fec7a4ae640 1 -- 192.168.123.105:0/2658613520 shutdown_connections 2026-03-09T20:20:37.449 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.449+0000 7fec7a4ae640 1 -- 192.168.123.105:0/2658613520 wait complete. 
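[Editor's note] The `tasks.cephadm.ceph_manager.ceph:need seq ... got ... for osd.N` lines above are the stats-flush wait: the manager records a target sequence per OSD and keeps asking the monitors for `osd last-stat-seq` until the reported value catches up (compare "need seq 51539607571 got 51539607569 for osd.1" with the later "need seq 34359738389 got 34359738389 for osd.0"). A minimal sketch of that polling, not the actual teuthology source; the `raw_cluster_cmd` helper name and the timeout are assumptions:

```python
import time

def wait_for_stat_seq(manager, osd_id, need_seq, timeout=60):
    """Poll `ceph osd last-stat-seq osd.N` until it reaches need_seq.

    `manager.raw_cluster_cmd` stands in for whatever helper the test harness
    uses to run `ceph ...` against the cluster (here, via `cephadm shell`).
    """
    deadline = time.time() + timeout
    while True:
        # The command prints a single integer on stdout, as seen in the log
        # (e.g. "34359738389" for osd.0 above).
        got = int(manager.raw_cluster_cmd('osd', 'last-stat-seq', f'osd.{osd_id}'))
        if got >= need_seq:
            return got  # logged as "need seq X got X for osd.N"
        if time.time() > deadline:
            raise RuntimeError(
                f'osd.{osd_id} stat seq stuck at {got}, want {need_seq}')
        time.sleep(1)  # while waiting, "need seq X got Y" lines repeat
```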
2026-03-09T20:20:37.451 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.449+0000 7fec7a4ae640 1 Processor -- start 2026-03-09T20:20:37.451 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.449+0000 7fec7a4ae640 1 -- start start 2026-03-09T20:20:37.451 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.449+0000 7fec7a4ae640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fec7410e820 con 0x7fec740772b0 2026-03-09T20:20:37.451 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.449+0000 7fec7a4ae640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fec74086090 con 0x7fec7407ae70 2026-03-09T20:20:37.451 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.449+0000 7fec7a4ae640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fec74086280 con 0x7fec74085be0 2026-03-09T20:20:37.451 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.449+0000 7fec73fff640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7fec740772b0 0x7fec7410de70 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:50788/0 (socket says 192.168.123.105:50788) 2026-03-09T20:20:37.451 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.449+0000 7fec73fff640 1 -- 192.168.123.105:0/819762631 learned_addr learned my addr 192.168.123.105:0/819762631 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:20:37.451 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.452+0000 7fec717fa640 1 -- 192.168.123.105:0/819762631 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3458371284 0 0) 0x7fec74086090 con 0x7fec7407ae70 2026-03-09T20:20:37.452 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.452+0000 7fec717fa640 1 -- 192.168.123.105:0/819762631 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fec4c003620 con 0x7fec7407ae70 2026-03-09T20:20:37.452 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.452+0000 7fec717fa640 1 -- 192.168.123.105:0/819762631 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2883569671 0 0) 0x7fec74086280 con 0x7fec74085be0 2026-03-09T20:20:37.452 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.453+0000 7fec717fa640 1 -- 192.168.123.105:0/819762631 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fec74086090 con 0x7fec74085be0 2026-03-09T20:20:37.452 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.453+0000 7fec717fa640 1 -- 192.168.123.105:0/819762631 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2235007070 0 0) 0x7fec7410e820 con 0x7fec740772b0 2026-03-09T20:20:37.452 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.453+0000 7fec717fa640 1 -- 192.168.123.105:0/819762631 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fec74086280 con 0x7fec740772b0 2026-03-09T20:20:37.452 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.453+0000 7fec717fa640 1 -- 192.168.123.105:0/819762631 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1415847407 0 0) 0x7fec4c003620 con 0x7fec7407ae70 2026-03-09T20:20:37.452 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.453+0000 7fec717fa640 1 -- 192.168.123.105:0/819762631 --> 
v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fec7410e820 con 0x7fec7407ae70 2026-03-09T20:20:37.453 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.453+0000 7fec717fa640 1 -- 192.168.123.105:0/819762631 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2575879533 0 0) 0x7fec74086090 con 0x7fec74085be0 2026-03-09T20:20:37.453 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.453+0000 7fec717fa640 1 -- 192.168.123.105:0/819762631 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fec4c003620 con 0x7fec74085be0 2026-03-09T20:20:37.453 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.454+0000 7fec717fa640 1 -- 192.168.123.105:0/819762631 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 721939232 0 0) 0x7fec74086280 con 0x7fec740772b0 2026-03-09T20:20:37.453 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.454+0000 7fec717fa640 1 -- 192.168.123.105:0/819762631 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fec74086090 con 0x7fec740772b0 2026-03-09T20:20:37.453 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.454+0000 7fec717fa640 1 -- 192.168.123.105:0/819762631 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7fec64003030 con 0x7fec7407ae70 2026-03-09T20:20:37.453 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.454+0000 7fec717fa640 1 -- 192.168.123.105:0/819762631 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7fec68002f70 con 0x7fec74085be0 2026-03-09T20:20:37.453 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.454+0000 7fec717fa640 1 -- 192.168.123.105:0/819762631 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7fec6c003530 con 0x7fec740772b0 2026-03-09T20:20:37.453 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.454+0000 7fec717fa640 1 -- 192.168.123.105:0/819762631 <== mon.1 v1:192.168.123.109:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 3010174169 0 0) 0x7fec7410e820 con 0x7fec7407ae70 2026-03-09T20:20:37.454 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.454+0000 7fec717fa640 1 -- 192.168.123.105:0/819762631 >> v1:192.168.123.105:6790/0 conn(0x7fec74085be0 legacy=0x7fec741bffe0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:37.454 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.455+0000 7fec717fa640 1 -- 192.168.123.105:0/819762631 >> v1:192.168.123.105:6789/0 conn(0x7fec740772b0 legacy=0x7fec7410de70 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:37.455 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.456+0000 7fec717fa640 1 -- 192.168.123.105:0/819762631 --> v1:192.168.123.109:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fec741c4730 con 0x7fec7407ae70 2026-03-09T20:20:37.456 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.456+0000 7fec7a4ae640 1 -- 192.168.123.105:0/819762631 --> v1:192.168.123.109:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7fec741c1760 con 0x7fec7407ae70 2026-03-09T20:20:37.456 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.456+0000 7fec7a4ae640 1 -- 192.168.123.105:0/819762631 --> v1:192.168.123.109:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7fec741c3f00 con 0x7fec7407ae70 
2026-03-09T20:20:37.456 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.456+0000 7fec717fa640 1 -- 192.168.123.105:0/819762631 <== mon.1 v1:192.168.123.109:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7fec64003b60 con 0x7fec7407ae70 2026-03-09T20:20:37.457 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.458+0000 7fec717fa640 1 -- 192.168.123.105:0/819762631 <== mon.1 v1:192.168.123.109:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7fec640048b0 con 0x7fec7407ae70 2026-03-09T20:20:37.458 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.459+0000 7fec717fa640 1 -- 192.168.123.105:0/819762631 <== mon.1 v1:192.168.123.109:6789/0 7 ==== mgrmap(e 16) ==== 100051+0+0 (unknown 646619093 0 0) 0x7fec64005b80 con 0x7fec7407ae70 2026-03-09T20:20:37.461 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.461+0000 7fec717fa640 1 -- 192.168.123.105:0/819762631 <== mon.1 v1:192.168.123.109:6789/0 8 ==== osd_map(58..58 src has 1..58) ==== 5922+0+0 (unknown 1300484666 0 0) 0x7fec64094b70 con 0x7fec7407ae70 2026-03-09T20:20:37.461 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.461+0000 7fec7a4ae640 1 -- 192.168.123.105:0/819762631 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fec38005180 con 0x7fec7407ae70 2026-03-09T20:20:37.464 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.464+0000 7fec717fa640 1 -- 192.168.123.105:0/819762631 <== mon.1 v1:192.168.123.109:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7fec6405dae0 con 0x7fec7407ae70 2026-03-09T20:20:37.518 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.518+0000 7fe478a44640 1 -- 192.168.123.105:0/2991764867 >> v1:192.168.123.105:6790/0 conn(0x7fe474074230 legacy=0x7fe474074610 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:37.519 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.520+0000 7fe478a44640 1 -- 192.168.123.105:0/2991764867 shutdown_connections 2026-03-09T20:20:37.519 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.520+0000 7fe478a44640 1 -- 192.168.123.105:0/2991764867 >> 192.168.123.105:0/2991764867 conn(0x7fe47406e900 msgr2=0x7fe47406ed10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:37.520 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.520+0000 7fe478a44640 1 -- 192.168.123.105:0/2991764867 shutdown_connections 2026-03-09T20:20:37.521 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.520+0000 7fe478a44640 1 -- 192.168.123.105:0/2991764867 wait complete. 
2026-03-09T20:20:37.521 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.520+0000 7fe478a44640 1 Processor -- start 2026-03-09T20:20:37.521 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.520+0000 7fe478a44640 1 -- start start 2026-03-09T20:20:37.521 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.520+0000 7fe478a44640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fe4741c1070 con 0x7fe47410ba00 2026-03-09T20:20:37.521 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.520+0000 7fe478a44640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fe4741c2270 con 0x7fe47407ae70 2026-03-09T20:20:37.521 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.520+0000 7fe478a44640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fe4741c3470 con 0x7fe4740772b0 2026-03-09T20:20:37.521 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.521+0000 7fe472575640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7fe4740772b0 0x7fe47410aff0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.105:47668/0 (socket says 192.168.123.105:47668) 2026-03-09T20:20:37.521 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.521+0000 7fe472575640 1 -- 192.168.123.105:0/1610838409 learned_addr learned my addr 192.168.123.105:0/1610838409 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:20:37.522 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.522+0000 7fe4637fe640 1 -- 192.168.123.105:0/1610838409 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1047781099 0 0) 0x7fe4741c2270 con 0x7fe47407ae70 2026-03-09T20:20:37.522 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.523+0000 7fe4637fe640 1 -- 192.168.123.105:0/1610838409 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fe450003620 con 0x7fe47407ae70 2026-03-09T20:20:37.522 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.523+0000 7fe4637fe640 1 -- 192.168.123.105:0/1610838409 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1486552609 0 0) 0x7fe4741c1070 con 0x7fe47410ba00 2026-03-09T20:20:37.522 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.523+0000 7fe4637fe640 1 -- 192.168.123.105:0/1610838409 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fe4741c2270 con 0x7fe47410ba00 2026-03-09T20:20:37.522 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.523+0000 7fe4637fe640 1 -- 192.168.123.105:0/1610838409 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2655742548 0 0) 0x7fe4741c3470 con 0x7fe4740772b0 2026-03-09T20:20:37.523 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.524+0000 7fe4637fe640 1 -- 192.168.123.105:0/1610838409 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fe4741c1070 con 0x7fe4740772b0 2026-03-09T20:20:37.523 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.524+0000 7fe4637fe640 1 -- 192.168.123.105:0/1610838409 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2952003670 0 0) 0x7fe450003620 con 0x7fe47407ae70 2026-03-09T20:20:37.523 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.524+0000 7fe4637fe640 1 -- 192.168.123.105:0/1610838409 --> 
v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fe4741c3470 con 0x7fe47407ae70 2026-03-09T20:20:37.523 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.524+0000 7fe4637fe640 1 -- 192.168.123.105:0/1610838409 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1391441100 0 0) 0x7fe4741c2270 con 0x7fe47410ba00 2026-03-09T20:20:37.524 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.525+0000 7fe4637fe640 1 -- 192.168.123.105:0/1610838409 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fe450003620 con 0x7fe47410ba00 2026-03-09T20:20:37.524 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.525+0000 7fe4637fe640 1 -- 192.168.123.105:0/1610838409 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 4259389510 0 0) 0x7fe4741c1070 con 0x7fe4740772b0 2026-03-09T20:20:37.524 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.525+0000 7fe4637fe640 1 -- 192.168.123.105:0/1610838409 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fe4741c2270 con 0x7fe4740772b0 2026-03-09T20:20:37.524 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.525+0000 7fe4637fe640 1 -- 192.168.123.105:0/1610838409 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7fe464003050 con 0x7fe47407ae70 2026-03-09T20:20:37.525 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.525+0000 7fe4637fe640 1 -- 192.168.123.105:0/1610838409 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7fe468002fb0 con 0x7fe47410ba00 2026-03-09T20:20:37.525 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.526+0000 7fe4637fe640 1 -- 192.168.123.105:0/1610838409 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7fe46c003400 con 0x7fe4740772b0 2026-03-09T20:20:37.525 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.526+0000 7fe4637fe640 1 -- 192.168.123.105:0/1610838409 <== mon.1 v1:192.168.123.109:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 3458341970 0 0) 0x7fe4741c3470 con 0x7fe47407ae70 2026-03-09T20:20:37.525 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.526+0000 7fe4637fe640 1 -- 192.168.123.105:0/1610838409 >> v1:192.168.123.105:6790/0 conn(0x7fe4740772b0 legacy=0x7fe47410aff0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:37.526 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.526+0000 7fe4637fe640 1 -- 192.168.123.105:0/1610838409 >> v1:192.168.123.105:6789/0 conn(0x7fe47410ba00 legacy=0x7fe4741bf770 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:37.526 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.527+0000 7fe4637fe640 1 -- 192.168.123.105:0/1610838409 --> v1:192.168.123.109:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fe4741c4670 con 0x7fe47407ae70 2026-03-09T20:20:37.527 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.527+0000 7fe478a44640 1 -- 192.168.123.105:0/1610838409 --> v1:192.168.123.109:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7fe4741c24a0 con 0x7fe47407ae70 2026-03-09T20:20:37.530 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.527+0000 7fe478a44640 1 -- 192.168.123.105:0/1610838409 --> v1:192.168.123.109:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7fe4741c29b0 con 0x7fe47407ae70 
2026-03-09T20:20:37.530 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.531+0000 7fe4637fe640 1 -- 192.168.123.105:0/1610838409 <== mon.1 v1:192.168.123.109:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7fe464003930 con 0x7fe47407ae70 2026-03-09T20:20:37.531 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.531+0000 7fe4637fe640 1 -- 192.168.123.105:0/1610838409 <== mon.1 v1:192.168.123.109:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7fe464004a60 con 0x7fe47407ae70 2026-03-09T20:20:37.531 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.532+0000 7fe4637fe640 1 -- 192.168.123.105:0/1610838409 <== mon.1 v1:192.168.123.109:6789/0 7 ==== mgrmap(e 16) ==== 100051+0+0 (unknown 646619093 0 0) 0x7fe464004cc0 con 0x7fe47407ae70 2026-03-09T20:20:37.533 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.533+0000 7fe4637fe640 1 -- 192.168.123.105:0/1610838409 <== mon.1 v1:192.168.123.109:6789/0 8 ==== osd_map(58..58 src has 1..58) ==== 5922+0+0 (unknown 1300484666 0 0) 0x7fe464094c60 con 0x7fe47407ae70 2026-03-09T20:20:37.533 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.534+0000 7fe478a44640 1 -- 192.168.123.105:0/1610838409 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fe440005180 con 0x7fe47407ae70 2026-03-09T20:20:37.536 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.537+0000 7fe4637fe640 1 -- 192.168.123.105:0/1610838409 <== mon.1 v1:192.168.123.109:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7fe46405dbd0 con 0x7fe47407ae70 2026-03-09T20:20:37.575 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.575+0000 7f8165ddb640 1 -- 192.168.123.105:0/3757022479 >> v1:192.168.123.105:6790/0 conn(0x7f816011e280 legacy=0x7f8160120670 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:37.578 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.579+0000 7f8165ddb640 1 -- 192.168.123.105:0/3757022479 shutdown_connections 2026-03-09T20:20:37.578 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.579+0000 7f8165ddb640 1 -- 192.168.123.105:0/3757022479 >> 192.168.123.105:0/3757022479 conn(0x7f816006d560 msgr2=0x7f816006d970 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:37.578 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.579+0000 7f8165ddb640 1 -- 192.168.123.105:0/3757022479 shutdown_connections 2026-03-09T20:20:37.580 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.580+0000 7f8165ddb640 1 -- 192.168.123.105:0/3757022479 wait complete. 
2026-03-09T20:20:37.580 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.580+0000 7f8165ddb640 1 Processor -- start 2026-03-09T20:20:37.580 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.580+0000 7f8165ddb640 1 -- start start 2026-03-09T20:20:37.580 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.580+0000 7f8165ddb640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f816007e2d0 con 0x7f816011a770 2026-03-09T20:20:37.580 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.580+0000 7f8165ddb640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f816007f4d0 con 0x7f8160074040 2026-03-09T20:20:37.580 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.580+0000 7f8165ddb640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f81600806d0 con 0x7f816011e280 2026-03-09T20:20:37.580 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.580+0000 7f81655da640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7f816011e280 0x7f816007c9f0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.105:47684/0 (socket says 192.168.123.105:47684) 2026-03-09T20:20:37.580 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.580+0000 7f81655da640 1 -- 192.168.123.105:0/965380237 learned_addr learned my addr 192.168.123.105:0/965380237 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:20:37.580 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.581+0000 7f815dffb640 1 -- 192.168.123.105:0/965380237 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1574115001 0 0) 0x7f81600806d0 con 0x7f816011e280 2026-03-09T20:20:37.581 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.581+0000 7f815dffb640 1 -- 192.168.123.105:0/965380237 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f8144003620 con 0x7f816011e280 2026-03-09T20:20:37.581 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.581+0000 7f815dffb640 1 -- 192.168.123.105:0/965380237 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 4233749105 0 0) 0x7f816007e2d0 con 0x7f816011a770 2026-03-09T20:20:37.581 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.581+0000 7f815dffb640 1 -- 192.168.123.105:0/965380237 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f81600806d0 con 0x7f816011a770 2026-03-09T20:20:37.581 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.581+0000 7f815dffb640 1 -- 192.168.123.105:0/965380237 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 899728550 0 0) 0x7f816007f4d0 con 0x7f8160074040 2026-03-09T20:20:37.581 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.581+0000 7f815dffb640 1 -- 192.168.123.105:0/965380237 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f816007e2d0 con 0x7f8160074040 2026-03-09T20:20:37.581 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.581+0000 7f815dffb640 1 -- 192.168.123.105:0/965380237 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1659873171 0 0) 0x7f8144003620 con 0x7f816011e280 2026-03-09T20:20:37.581 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.581+0000 7f815dffb640 1 -- 192.168.123.105:0/965380237 --> 
v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f816007f4d0 con 0x7f816011e280 2026-03-09T20:20:37.581 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.582+0000 7f815dffb640 1 -- 192.168.123.105:0/965380237 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f8158004150 con 0x7f816011e280 2026-03-09T20:20:37.581 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.582+0000 7f815dffb640 1 -- 192.168.123.105:0/965380237 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3093510371 0 0) 0x7f816007e2d0 con 0x7f8160074040 2026-03-09T20:20:37.582 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.582+0000 7f815dffb640 1 -- 192.168.123.105:0/965380237 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f8144003620 con 0x7f8160074040 2026-03-09T20:20:37.582 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.582+0000 7f815dffb640 1 -- 192.168.123.105:0/965380237 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2667301210 0 0) 0x7f81600806d0 con 0x7f816011a770 2026-03-09T20:20:37.583 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.582+0000 7f815dffb640 1 -- 192.168.123.105:0/965380237 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f816007e2d0 con 0x7f816011a770 2026-03-09T20:20:37.584 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.582+0000 7f815dffb640 1 -- 192.168.123.105:0/965380237 <== mon.2 v1:192.168.123.105:6790/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2976892801 0 0) 0x7f816007f4d0 con 0x7f816011e280 2026-03-09T20:20:37.584 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.582+0000 7f815dffb640 1 -- 192.168.123.105:0/965380237 >> v1:192.168.123.109:6789/0 conn(0x7f8160074040 legacy=0x7f816010f120 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:37.585 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.582+0000 7f815dffb640 1 -- 192.168.123.105:0/965380237 >> v1:192.168.123.105:6789/0 conn(0x7f816011a770 legacy=0x7f81600792c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:37.585 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.582+0000 7f815dffb640 1 -- 192.168.123.105:0/965380237 --> v1:192.168.123.105:6790/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f81600818d0 con 0x7f816011e280 2026-03-09T20:20:37.585 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.582+0000 7f8165ddb640 1 -- 192.168.123.105:0/965380237 --> v1:192.168.123.105:6790/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f816007f700 con 0x7f816011e280 2026-03-09T20:20:37.585 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.582+0000 7f8165ddb640 1 -- 192.168.123.105:0/965380237 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=0}) -- 0x7f816007fc60 con 0x7f816011e280 2026-03-09T20:20:37.585 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.583+0000 7f815dffb640 1 -- 192.168.123.105:0/965380237 <== mon.2 v1:192.168.123.105:6790/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f8158003750 con 0x7f816011e280 2026-03-09T20:20:37.585 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.583+0000 7f815dffb640 1 -- 192.168.123.105:0/965380237 <== mon.2 v1:192.168.123.105:6790/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f8158005030 con 0x7f816011e280 
2026-03-09T20:20:37.585 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.584+0000 7f815dffb640 1 -- 192.168.123.105:0/965380237 <== mon.2 v1:192.168.123.105:6790/0 7 ==== mgrmap(e 16) ==== 100051+0+0 (unknown 646619093 0 0) 0x7f8158003300 con 0x7f816011e280 2026-03-09T20:20:37.585 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.585+0000 7f815dffb640 1 -- 192.168.123.105:0/965380237 <== mon.2 v1:192.168.123.105:6790/0 8 ==== osd_map(58..58 src has 1..58) ==== 5922+0+0 (unknown 1300484666 0 0) 0x7f8158095310 con 0x7f816011e280 2026-03-09T20:20:37.592 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.592+0000 7f8165ddb640 1 -- 192.168.123.105:0/965380237 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f8128005180 con 0x7f816011e280 2026-03-09T20:20:37.598 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.598+0000 7f815dffb640 1 -- 192.168.123.105:0/965380237 <== mon.2 v1:192.168.123.105:6790/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f815805e280 con 0x7f816011e280 2026-03-09T20:20:37.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:37 vm05 ceph-mon[51870]: mgrmap e16: y(active, since 2m), standbys: x 2026-03-09T20:20:37.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:37 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/118746643' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T20:20:37.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:37 vm05 ceph-mon[61345]: mgrmap e16: y(active, since 2m), standbys: x 2026-03-09T20:20:37.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:37 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/118746643' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T20:20:37.665 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.666+0000 7f2c3002c640 1 -- 192.168.123.105:0/507700049 >> v1:192.168.123.105:6789/0 conn(0x7f2c200a5510 legacy=0x7f2c200b85f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:37.666 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.666+0000 7f9f225a0640 1 -- 192.168.123.105:0/4006273147 >> v1:192.168.123.105:6789/0 conn(0x7f9f1c07ae70 legacy=0x7f9f1c07d330 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:37.666 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.666+0000 7f9f225a0640 1 -- 192.168.123.105:0/4006273147 shutdown_connections 2026-03-09T20:20:37.666 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.667+0000 7f9f225a0640 1 -- 192.168.123.105:0/4006273147 >> 192.168.123.105:0/4006273147 conn(0x7f9f1c06e900 msgr2=0x7f9f1c06ed10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:37.666 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.666+0000 7f2c3002c640 1 -- 192.168.123.105:0/507700049 shutdown_connections 2026-03-09T20:20:37.666 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.666+0000 7f2c3002c640 1 -- 192.168.123.105:0/507700049 >> 192.168.123.105:0/507700049 conn(0x7f2c2001a430 msgr2=0x7f2c2001a840 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:37.666 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.666+0000 7f2c3002c640 1 -- 192.168.123.105:0/507700049 shutdown_connections 2026-03-09T20:20:37.667 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.667+0000 7f2c3002c640 1 -- 192.168.123.105:0/507700049 wait complete. 
2026-03-09T20:20:37.667 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.667+0000 7f2c3002c640 1 Processor -- start 2026-03-09T20:20:37.667 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.667+0000 7f2c3002c640 1 -- start start 2026-03-09T20:20:37.667 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.668+0000 7f2c3002c640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f2c200b9630 con 0x7f2c200b9d10 2026-03-09T20:20:37.667 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.668+0000 7f2c3002c640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f2c200b9800 con 0x7f2c200a4a20 2026-03-09T20:20:37.667 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.668+0000 7f2c3002c640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f2c200b99d0 con 0x7f2c200a5510 2026-03-09T20:20:37.667 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.668+0000 7f2c2e5a2640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f2c200b9d10 0x7f2c200b8f20 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:50852/0 (socket says 192.168.123.105:50852) 2026-03-09T20:20:37.667 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.668+0000 7f2c2e5a2640 1 -- 192.168.123.105:0/2893678035 learned_addr learned my addr 192.168.123.105:0/2893678035 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:20:37.668 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.667+0000 7f9f225a0640 1 -- 192.168.123.105:0/4006273147 shutdown_connections 2026-03-09T20:20:37.668 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.667+0000 7f9f225a0640 1 -- 192.168.123.105:0/4006273147 wait complete. 
2026-03-09T20:20:37.668 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.667+0000 7f9f225a0640 1 Processor -- start 2026-03-09T20:20:37.668 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.667+0000 7f9f225a0640 1 -- start start 2026-03-09T20:20:37.668 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.667+0000 7f9f225a0640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f9f1c1c0ff0 con 0x7f9f1c074230 2026-03-09T20:20:37.668 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.667+0000 7f9f225a0640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f9f1c1c21d0 con 0x7f9f1c0772b0 2026-03-09T20:20:37.668 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.667+0000 7f9f225a0640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f9f1c1c33f0 con 0x7f9f1c07b2b0 2026-03-09T20:20:37.668 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.667+0000 7f9f1b7fe640 1 --1- >> v1:192.168.123.109:6789/0 conn(0x7f9f1c0772b0 0x7f9f1c079760 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.109:6789/0 says I am v1:192.168.123.105:46282/0 (socket says 192.168.123.105:46282) 2026-03-09T20:20:37.668 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.667+0000 7f9f1b7fe640 1 -- 192.168.123.105:0/1660925591 learned_addr learned my addr 192.168.123.105:0/1660925591 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:20:37.668 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.669+0000 7f9f197fa640 1 -- 192.168.123.105:0/1660925591 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1015446044 0 0) 0x7f9f1c1c0ff0 con 0x7f9f1c074230 2026-03-09T20:20:37.668 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.669+0000 7f9f197fa640 1 -- 192.168.123.105:0/1660925591 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f9eec003620 con 0x7f9f1c074230 2026-03-09T20:20:37.668 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.669+0000 7f2c16ffd640 1 -- 192.168.123.105:0/2893678035 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3635984388 0 0) 0x7f2c200b9630 con 0x7f2c200b9d10 2026-03-09T20:20:37.668 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.669+0000 7f9f197fa640 1 -- 192.168.123.105:0/1660925591 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 373012925 0 0) 0x7f9f1c1c21d0 con 0x7f9f1c0772b0 2026-03-09T20:20:37.668 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.669+0000 7f9f197fa640 1 -- 192.168.123.105:0/1660925591 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f9f1c1c0ff0 con 0x7f9f1c0772b0 2026-03-09T20:20:37.669 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.669+0000 7f9f197fa640 1 -- 192.168.123.105:0/1660925591 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1226714393 0 0) 0x7f9f1c1c33f0 con 0x7f9f1c07b2b0 2026-03-09T20:20:37.669 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.669+0000 7f9f197fa640 1 -- 192.168.123.105:0/1660925591 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f9f1c1c21d0 con 0x7f9f1c07b2b0 2026-03-09T20:20:37.669 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.670+0000 7f9f197fa640 1 -- 192.168.123.105:0/1660925591 <== mon.0 
v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2289031842 0 0) 0x7f9eec003620 con 0x7f9f1c074230 2026-03-09T20:20:37.669 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.669+0000 7f2c16ffd640 1 -- 192.168.123.105:0/2893678035 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f2bf0003620 con 0x7f2c200b9d10 2026-03-09T20:20:37.669 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.669+0000 7f2c16ffd640 1 -- 192.168.123.105:0/2893678035 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 559855232 0 0) 0x7f2c200b9800 con 0x7f2c200a4a20 2026-03-09T20:20:37.669 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.669+0000 7f2c16ffd640 1 -- 192.168.123.105:0/2893678035 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f2c200b9630 con 0x7f2c200a4a20 2026-03-09T20:20:37.669 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.669+0000 7f2c16ffd640 1 -- 192.168.123.105:0/2893678035 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 857246009 0 0) 0x7f2c200b99d0 con 0x7f2c200a5510 2026-03-09T20:20:37.669 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.670+0000 7f9f197fa640 1 -- 192.168.123.105:0/1660925591 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f9f1c1c33f0 con 0x7f9f1c074230 2026-03-09T20:20:37.669 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.670+0000 7f9f197fa640 1 -- 192.168.123.105:0/1660925591 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1177878674 0 0) 0x7f9f1c1c0ff0 con 0x7f9f1c0772b0 2026-03-09T20:20:37.669 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.670+0000 7f9f197fa640 1 -- 192.168.123.105:0/1660925591 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f9eec003620 con 0x7f9f1c0772b0 2026-03-09T20:20:37.669 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.670+0000 7f9f197fa640 1 -- 192.168.123.105:0/1660925591 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1964858249 0 0) 0x7f9f1c1c21d0 con 0x7f9f1c07b2b0 2026-03-09T20:20:37.670 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.669+0000 7f2c16ffd640 1 -- 192.168.123.105:0/2893678035 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f2c200b9800 con 0x7f2c200a5510 2026-03-09T20:20:37.670 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.670+0000 7f2c16ffd640 1 -- 192.168.123.105:0/2893678035 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2946693343 0 0) 0x7f2bf0003620 con 0x7f2c200b9d10 2026-03-09T20:20:37.670 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.670+0000 7f2c16ffd640 1 -- 192.168.123.105:0/2893678035 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f2c200b99d0 con 0x7f2c200b9d10 2026-03-09T20:20:37.670 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.670+0000 7f2c16ffd640 1 -- 192.168.123.105:0/2893678035 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f2c240034d0 con 0x7f2c200b9d10 2026-03-09T20:20:37.670 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.670+0000 7f9f197fa640 1 -- 192.168.123.105:0/1660925591 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f9f1c1c0ff0 con 
0x7f9f1c07b2b0 2026-03-09T20:20:37.670 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.671+0000 7f9f197fa640 1 -- 192.168.123.105:0/1660925591 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f9f0c002f90 con 0x7f9f1c074230 2026-03-09T20:20:37.670 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.670+0000 7f2c16ffd640 1 -- 192.168.123.105:0/2893678035 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2108710328 0 0) 0x7f2c200b9800 con 0x7f2c200a5510 2026-03-09T20:20:37.670 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.670+0000 7f2c16ffd640 1 -- 192.168.123.105:0/2893678035 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f2bf0003620 con 0x7f2c200a5510 2026-03-09T20:20:37.670 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.670+0000 7f2c16ffd640 1 -- 192.168.123.105:0/2893678035 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 1142922544 0 0) 0x7f2c200b99d0 con 0x7f2c200b9d10 2026-03-09T20:20:37.670 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.670+0000 7f2c16ffd640 1 -- 192.168.123.105:0/2893678035 >> v1:192.168.123.105:6790/0 conn(0x7f2c200a5510 legacy=0x7f2c200b8810 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:37.670 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.671+0000 7f2c16ffd640 1 -- 192.168.123.105:0/2893678035 >> v1:192.168.123.109:6789/0 conn(0x7f2c200a4a20 legacy=0x7f2c200b2af0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:37.670 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.671+0000 7f9f197fa640 1 -- 192.168.123.105:0/1660925591 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f9f10003120 con 0x7f9f1c0772b0 2026-03-09T20:20:37.670 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.671+0000 7f2c16ffd640 1 -- 192.168.123.105:0/2893678035 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f2c20160290 con 0x7f2c200b9d10 2026-03-09T20:20:37.670 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.671+0000 7f9f197fa640 1 -- 192.168.123.105:0/1660925591 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f9f14003490 con 0x7f9f1c07b2b0 2026-03-09T20:20:37.670 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.671+0000 7f2c3002c640 1 -- 192.168.123.105:0/2893678035 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f2c2015d260 con 0x7f2c200b9d10 2026-03-09T20:20:37.670 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.671+0000 7f2c3002c640 1 -- 192.168.123.105:0/2893678035 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f2c2015d770 con 0x7f2c200b9d10 2026-03-09T20:20:37.670 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.671+0000 7f9f197fa640 1 -- 192.168.123.105:0/1660925591 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2393734482 0 0) 0x7f9f1c1c33f0 con 0x7f9f1c074230 2026-03-09T20:20:37.671 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.671+0000 7f2c16ffd640 1 -- 192.168.123.105:0/2893678035 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f2c24003e30 con 0x7f2c200b9d10 2026-03-09T20:20:37.671 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.672+0000 7f2c16ffd640 1 -- 192.168.123.105:0/2893678035 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f2c24005ee0 con 0x7f2c200b9d10 2026-03-09T20:20:37.672 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.673+0000 7f9f197fa640 1 -- 192.168.123.105:0/1660925591 >> v1:192.168.123.105:6790/0 conn(0x7f9f1c07b2b0 legacy=0x7f9f1c079e70 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:37.672 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.673+0000 7f2c3002c640 1 -- 192.168.123.105:0/2893678035 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f2c200a5e10 con 0x7f2c200b9d10 2026-03-09T20:20:37.675 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.673+0000 7f9f197fa640 1 -- 192.168.123.105:0/1660925591 >> v1:192.168.123.109:6789/0 conn(0x7f9f1c0772b0 legacy=0x7f9f1c079760 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:37.675 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.673+0000 7f9f197fa640 1 -- 192.168.123.105:0/1660925591 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f9f1c1c45f0 con 0x7f9f1c074230 2026-03-09T20:20:37.675 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.673+0000 7f9f225a0640 1 -- 192.168.123.105:0/1660925591 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f9f1c1c1220 con 0x7f9f1c074230 2026-03-09T20:20:37.675 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.673+0000 7f2c16ffd640 1 -- 192.168.123.105:0/2893678035 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 16) ==== 100051+0+0 (unknown 646619093 0 0) 0x7f2c24005ee0 con 0x7f2c200b9d10 2026-03-09T20:20:37.675 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.673+0000 7f9f225a0640 1 -- 192.168.123.105:0/1660925591 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f9f1c1c1760 con 0x7f9f1c074230 2026-03-09T20:20:37.675 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.676+0000 7f9f197fa640 1 -- 192.168.123.105:0/1660925591 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f9f0c003130 con 0x7f9f1c074230 2026-03-09T20:20:37.675 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.676+0000 7f9efaffd640 1 -- 192.168.123.105:0/1660925591 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f9ef0005180 con 0x7f9f1c074230 2026-03-09T20:20:37.675 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.676+0000 7f9f197fa640 1 -- 192.168.123.105:0/1660925591 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f9f0c0049c0 con 0x7f9f1c074230 2026-03-09T20:20:37.677 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.678+0000 7f9f197fa640 1 -- 192.168.123.105:0/1660925591 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 16) ==== 100051+0+0 (unknown 646619093 0 0) 0x7f9f0c01d260 con 0x7f9f1c074230 2026-03-09T20:20:37.679 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.680+0000 7f2c16ffd640 1 -- 192.168.123.105:0/2893678035 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(58..58 src has 1..58) ==== 5922+0+0 (unknown 1300484666 0 0) 0x7f2c24095180 con 0x7f2c200b9d10 2026-03-09T20:20:37.679 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.680+0000 7f9f197fa640 1 -- 
192.168.123.105:0/1660925591 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(58..58 src has 1..58) ==== 5922+0+0 (unknown 1300484666 0 0) 0x7f9f0c094ba0 con 0x7f9f1c074230 2026-03-09T20:20:37.680 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.680+0000 7f9f197fa640 1 -- 192.168.123.105:0/1660925591 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f9f0c003710 con 0x7f9f1c074230 2026-03-09T20:20:37.683 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.684+0000 7f2c16ffd640 1 -- 192.168.123.105:0/2893678035 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f2c240975a0 con 0x7f2c200b9d10 2026-03-09T20:20:37.705 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.705+0000 7f8e237a0640 1 -- 192.168.123.105:0/3430097453 >> v1:192.168.123.105:6790/0 conn(0x7f8e1c074230 legacy=0x7f8e1c074610 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:37.705 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.705+0000 7f8e237a0640 1 -- 192.168.123.105:0/3430097453 shutdown_connections 2026-03-09T20:20:37.705 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.705+0000 7f8e237a0640 1 -- 192.168.123.105:0/3430097453 >> 192.168.123.105:0/3430097453 conn(0x7f8e1c06e900 msgr2=0x7f8e1c06ed10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:37.715 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.706+0000 7f8e237a0640 1 -- 192.168.123.105:0/3430097453 shutdown_connections 2026-03-09T20:20:37.715 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.706+0000 7f8e237a0640 1 -- 192.168.123.105:0/3430097453 wait complete. 
2026-03-09T20:20:37.715 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.706+0000 7f8e237a0640 1 Processor -- start 2026-03-09T20:20:37.715 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.706+0000 7f8e237a0640 1 -- start start 2026-03-09T20:20:37.715 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.706+0000 7f8e237a0640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f8e1c1c1080 con 0x7f8e1c07ae70 2026-03-09T20:20:37.715 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.706+0000 7f8e237a0640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f8e1c1c2260 con 0x7f8e1c0772b0 2026-03-09T20:20:37.715 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.706+0000 7f8e237a0640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f8e1c1c3440 con 0x7f8e1c10bb20 2026-03-09T20:20:37.715 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.706+0000 7f8e20d14640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f8e1c07ae70 0x7f8e1c10d940 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:50878/0 (socket says 192.168.123.105:50878) 2026-03-09T20:20:37.715 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.706+0000 7f8e20d14640 1 -- 192.168.123.105:0/1290199391 learned_addr learned my addr 192.168.123.105:0/1290199391 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:20:37.715 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.715+0000 7f8e127fc640 1 -- 192.168.123.105:0/1290199391 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 4003448230 0 0) 0x7f8e1c1c1080 con 0x7f8e1c07ae70 2026-03-09T20:20:37.715 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.715+0000 7f8e127fc640 1 -- 192.168.123.105:0/1290199391 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f8df0003620 con 0x7f8e1c07ae70 2026-03-09T20:20:37.715 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.715+0000 7f8e127fc640 1 -- 192.168.123.105:0/1290199391 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3370654717 0 0) 0x7f8e1c1c3440 con 0x7f8e1c10bb20 2026-03-09T20:20:37.715 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.715+0000 7f8e127fc640 1 -- 192.168.123.105:0/1290199391 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f8e1c1c1080 con 0x7f8e1c10bb20 2026-03-09T20:20:37.715 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.716+0000 7f8e127fc640 1 -- 192.168.123.105:0/1290199391 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1660992405 0 0) 0x7f8e1c1c2260 con 0x7f8e1c0772b0 2026-03-09T20:20:37.715 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.716+0000 7f8e127fc640 1 -- 192.168.123.105:0/1290199391 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f8e1c1c3440 con 0x7f8e1c0772b0 2026-03-09T20:20:37.715 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.716+0000 7f8e127fc640 1 -- 192.168.123.105:0/1290199391 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1720338441 0 0) 0x7f8df0003620 con 0x7f8e1c07ae70 2026-03-09T20:20:37.716 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.716+0000 7f8e127fc640 1 -- 192.168.123.105:0/1290199391 --> 
v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f8e1c1c2260 con 0x7f8e1c07ae70 2026-03-09T20:20:37.716 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.716+0000 7f8e127fc640 1 -- 192.168.123.105:0/1290199391 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 312682201 0 0) 0x7f8e1c1c1080 con 0x7f8e1c10bb20 2026-03-09T20:20:37.716 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.716+0000 7f8e127fc640 1 -- 192.168.123.105:0/1290199391 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f8df0003620 con 0x7f8e1c10bb20 2026-03-09T20:20:37.716 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.716+0000 7f8e127fc640 1 -- 192.168.123.105:0/1290199391 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f8e180031c0 con 0x7f8e1c07ae70 2026-03-09T20:20:37.716 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.716+0000 7f8e127fc640 1 -- 192.168.123.105:0/1290199391 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f8e0c002ef0 con 0x7f8e1c10bb20 2026-03-09T20:20:37.716 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.716+0000 7f8e127fc640 1 -- 192.168.123.105:0/1290199391 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 966002377 0 0) 0x7f8e1c1c3440 con 0x7f8e1c0772b0 2026-03-09T20:20:37.716 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.716+0000 7f8e127fc640 1 -- 192.168.123.105:0/1290199391 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f8e1c1c1080 con 0x7f8e1c0772b0 2026-03-09T20:20:37.716 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.716+0000 7f8e127fc640 1 -- 192.168.123.105:0/1290199391 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2763438901 0 0) 0x7f8e1c1c2260 con 0x7f8e1c07ae70 2026-03-09T20:20:37.716 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.716+0000 7f8e127fc640 1 -- 192.168.123.105:0/1290199391 >> v1:192.168.123.105:6790/0 conn(0x7f8e1c10bb20 legacy=0x7f8e1c10e0b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:37.716 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.716+0000 7f8e127fc640 1 -- 192.168.123.105:0/1290199391 >> v1:192.168.123.109:6789/0 conn(0x7f8e1c0772b0 legacy=0x7f8e1c10b230 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:37.716 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.717+0000 7f8e127fc640 1 -- 192.168.123.105:0/1290199391 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f8e1c1c4640 con 0x7f8e1c07ae70 2026-03-09T20:20:37.719 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.717+0000 7f8e237a0640 1 -- 192.168.123.105:0/1290199391 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f8e1c1c12b0 con 0x7f8e1c07ae70 2026-03-09T20:20:37.719 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.719+0000 7f8e237a0640 1 -- 192.168.123.105:0/1290199391 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f8e1c1c17f0 con 0x7f8e1c07ae70 2026-03-09T20:20:37.719 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.719+0000 7f8e127fc640 1 -- 192.168.123.105:0/1290199391 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f8e18003500 con 0x7f8e1c07ae70 
2026-03-09T20:20:37.719 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.719+0000 7f8e127fc640 1 -- 192.168.123.105:0/1290199391 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f8e18005c60 con 0x7f8e1c07ae70 2026-03-09T20:20:37.720 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.721+0000 7f8e127fc640 1 -- 192.168.123.105:0/1290199391 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 16) ==== 100051+0+0 (unknown 646619093 0 0) 0x7f8e18006f30 con 0x7f8e1c07ae70 2026-03-09T20:20:37.721 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.722+0000 7f8e127fc640 1 -- 192.168.123.105:0/1290199391 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(58..58 src has 1..58) ==== 5922+0+0 (unknown 1300484666 0 0) 0x7f8e18095eb0 con 0x7f8e1c07ae70 2026-03-09T20:20:37.721 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.722+0000 7f8deffff640 1 -- 192.168.123.105:0/1290199391 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f8de4005180 con 0x7f8e1c07ae70 2026-03-09T20:20:37.727 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.727+0000 7f8e127fc640 1 -- 192.168.123.105:0/1290199391 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f8e1805ee20 con 0x7f8e1c07ae70 2026-03-09T20:20:37.743 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph osd last-stat-seq osd.1 2026-03-09T20:20:37.769 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.765+0000 7fec7a4ae640 1 -- 192.168.123.105:0/819762631 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "osd last-stat-seq", "id": 5} v 0) -- 0x7fec38005470 con 0x7fec7407ae70 2026-03-09T20:20:37.772 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.773+0000 7fec717fa640 1 -- 192.168.123.105:0/819762631 <== mon.1 v1:192.168.123.109:6789/0 10 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 5}]=0 v0) ==== 74+0+13 (unknown 2131975755 0 2474467777) 0x7fec64061790 con 0x7fec7407ae70 2026-03-09T20:20:37.772 INFO:teuthology.orchestra.run.vm05.stdout:154618822664 2026-03-09T20:20:37.774 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:37 vm09 ceph-mon[54524]: mgrmap e16: y(active, since 2m), standbys: x 2026-03-09T20:20:37.774 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:37 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/118746643' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T20:20:37.801 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.796+0000 7fec52ffd640 1 -- 192.168.123.105:0/819762631 >> v1:192.168.123.105:6800/3290461294 conn(0x7fec4c0789a0 legacy=0x7fec4c07ae60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:37.801 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.796+0000 7fec52ffd640 1 -- 192.168.123.105:0/819762631 >> v1:192.168.123.109:6789/0 conn(0x7fec7407ae70 legacy=0x7fec740854d0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:37.804 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.804+0000 7fec52ffd640 1 -- 192.168.123.105:0/819762631 shutdown_connections 2026-03-09T20:20:37.804 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.804+0000 7fec52ffd640 1 -- 192.168.123.105:0/819762631 >> 192.168.123.105:0/819762631 conn(0x7fec7406e900 msgr2=0x7fec7410f2f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:37.804 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.805+0000 7fec52ffd640 1 -- 192.168.123.105:0/819762631 shutdown_connections 2026-03-09T20:20:37.807 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.805+0000 7fec52ffd640 1 -- 192.168.123.105:0/819762631 wait complete. 2026-03-09T20:20:37.938 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.934+0000 7fe478a44640 1 -- 192.168.123.105:0/1610838409 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "osd last-stat-seq", "id": 6} v 0) -- 0x7fe440005470 con 0x7fe47407ae70 2026-03-09T20:20:37.940 INFO:teuthology.orchestra.run.vm05.stdout:176093659142 2026-03-09T20:20:37.940 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.939+0000 7fe4637fe640 1 -- 192.168.123.105:0/1610838409 <== mon.1 v1:192.168.123.109:6789/0 10 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 6}]=0 v0) ==== 74+0+13 (unknown 1274345170 0 1153845477) 0x7fe464061880 con 0x7fe47407ae70 2026-03-09T20:20:37.956 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.953+0000 7fe478a44640 1 -- 192.168.123.105:0/1610838409 >> v1:192.168.123.105:6800/3290461294 conn(0x7fe450078a60 legacy=0x7fe45007af20 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:37.956 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.956+0000 7fe478a44640 1 -- 192.168.123.105:0/1610838409 >> v1:192.168.123.109:6789/0 conn(0x7fe47407ae70 legacy=0x7fe4741bbfc0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:37.956 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.956+0000 7fe478a44640 1 -- 192.168.123.105:0/1610838409 shutdown_connections 2026-03-09T20:20:37.958 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.956+0000 7fe478a44640 1 -- 192.168.123.105:0/1610838409 >> 192.168.123.105:0/1610838409 conn(0x7fe47406e900 msgr2=0x7fe47410f2f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:37.958 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.958+0000 7fe478a44640 1 -- 192.168.123.105:0/1610838409 shutdown_connections 2026-03-09T20:20:37.958 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:37.958+0000 7fe478a44640 1 -- 192.168.123.105:0/1610838409 wait complete. 
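Each of the `ceph osd last-stat-seq osd.N` checks above is run through a fresh `cephadm shell` container (the DEBUG:teuthology.orchestra.run lines show the exact command), and the bare integer on stdout is the sequence number the harness compares against. A minimal sketch of that invocation, assuming plain subprocess on the test node rather than teuthology's SSH-based orchestra runner, and reusing the image and fsid from this run:

import subprocess

IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"
FSID = "c0151936-1bf4-11f1-b896-23f7bea8a6ea"

def last_stat_seq(osd_id):
    # Mirrors the DEBUG command lines above: run ceph inside a cephadm shell
    # container and parse the single integer printed on stdout
    # (e.g. 154618822664 for osd.5 earlier in this run).
    cmd = [
        "sudo", "cephadm", "--image", IMAGE,
        "shell", "--fsid", FSID, "--",
        "ceph", "osd", "last-stat-seq", "osd.%d" % osd_id,
    ]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    return int(out.strip())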
2026-03-09T20:20:38.026 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.026+0000 7f9efaffd640 1 -- 192.168.123.105:0/1660925591 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "osd last-stat-seq", "id": 3} v 0) -- 0x7f9ef0005470 con 0x7f9f1c074230 2026-03-09T20:20:38.028 INFO:teuthology.orchestra.run.vm05.stdout:107374182414 2026-03-09T20:20:38.028 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.027+0000 7f9f197fa640 1 -- 192.168.123.105:0/1660925591 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 3}]=0 v0) ==== 74+0+13 (unknown 383520633 0 902116843) 0x7f9f0c05db10 con 0x7f9f1c074230 2026-03-09T20:20:38.037 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.034+0000 7f9f225a0640 1 -- 192.168.123.105:0/1660925591 >> v1:192.168.123.105:6800/3290461294 conn(0x7f9eec078750 legacy=0x7f9eec07ac10 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:38.038 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.038+0000 7f9f225a0640 1 -- 192.168.123.105:0/1660925591 >> v1:192.168.123.105:6789/0 conn(0x7f9f1c074230 legacy=0x7f9f1c07a9f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:38.041 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.038+0000 7f9f225a0640 1 -- 192.168.123.105:0/1660925591 shutdown_connections 2026-03-09T20:20:38.041 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.038+0000 7f9f225a0640 1 -- 192.168.123.105:0/1660925591 >> 192.168.123.105:0/1660925591 conn(0x7f9f1c06e900 msgr2=0x7f9f1c072f70 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:38.041 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.038+0000 7f9f225a0640 1 -- 192.168.123.105:0/1660925591 shutdown_connections 2026-03-09T20:20:38.041 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.039+0000 7f9f225a0640 1 -- 192.168.123.105:0/1660925591 wait complete. 
2026-03-09T20:20:38.084 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:20:38.106 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.106+0000 7f8165ddb640 1 -- 192.168.123.105:0/965380237 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "osd last-stat-seq", "id": 4} v 0) -- 0x7f8128005470 con 0x7f816011e280 2026-03-09T20:20:38.120 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.119+0000 7f815dffb640 1 -- 192.168.123.105:0/965380237 <== mon.2 v1:192.168.123.105:6790/0 10 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 4}]=0 v0) ==== 74+0+13 (unknown 1823589948 0 2255445190) 0x7f8158061f30 con 0x7f816011e280 2026-03-09T20:20:38.121 INFO:teuthology.orchestra.run.vm05.stdout:128849018891 2026-03-09T20:20:38.122 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.121+0000 7f8deffff640 1 -- 192.168.123.105:0/1290199391 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "osd last-stat-seq", "id": 7} v 0) -- 0x7f8de4005470 con 0x7f8e1c07ae70 2026-03-09T20:20:38.123 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.124+0000 7f8e127fc640 1 -- 192.168.123.105:0/1290199391 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 7}]=0 v0) ==== 74+0+13 (unknown 1482059429 0 1478297535) 0x7f8e18062ad0 con 0x7f8e1c07ae70 2026-03-09T20:20:38.128 INFO:teuthology.orchestra.run.vm05.stdout:197568495619 2026-03-09T20:20:38.133 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.132+0000 7f8e237a0640 1 -- 192.168.123.105:0/1290199391 >> v1:192.168.123.105:6800/3290461294 conn(0x7f8df00787b0 legacy=0x7f8df007ac70 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:38.133 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.132+0000 7f8e237a0640 1 -- 192.168.123.105:0/1290199391 >> v1:192.168.123.105:6789/0 conn(0x7f8e1c07ae70 legacy=0x7f8e1c10d940 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:38.135 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.136+0000 7f8165ddb640 1 -- 192.168.123.105:0/965380237 >> v1:192.168.123.105:6800/3290461294 conn(0x7f8144078470 legacy=0x7f814407a930 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:38.135 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.136+0000 7f8165ddb640 1 -- 192.168.123.105:0/965380237 >> v1:192.168.123.105:6790/0 conn(0x7f816011e280 legacy=0x7f816007c9f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:38.136 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.137+0000 7f8165ddb640 1 -- 192.168.123.105:0/965380237 shutdown_connections 2026-03-09T20:20:38.136 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.137+0000 7f8165ddb640 1 -- 192.168.123.105:0/965380237 >> 192.168.123.105:0/965380237 conn(0x7f816006d560 msgr2=0x7f81601109b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:38.136 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.136+0000 7f8e237a0640 1 -- 192.168.123.105:0/1290199391 shutdown_connections 2026-03-09T20:20:38.136 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.136+0000 7f8e237a0640 1 -- 192.168.123.105:0/1290199391 >> 192.168.123.105:0/1290199391 conn(0x7f8e1c06e900 msgr2=0x7f8e1c10f2f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:38.136 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.137+0000 7f8e237a0640 1 -- 
192.168.123.105:0/1290199391 shutdown_connections 2026-03-09T20:20:38.136 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.137+0000 7f8165ddb640 1 -- 192.168.123.105:0/965380237 shutdown_connections 2026-03-09T20:20:38.136 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.137+0000 7f8e237a0640 1 -- 192.168.123.105:0/1290199391 wait complete. 2026-03-09T20:20:38.136 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.137+0000 7f8165ddb640 1 -- 192.168.123.105:0/965380237 wait complete. 2026-03-09T20:20:38.151 INFO:tasks.cephadm.ceph_manager.ceph:need seq 154618822665 got 154618822664 for osd.5 2026-03-09T20:20:38.214 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.213+0000 7f2c3002c640 1 -- 192.168.123.105:0/2893678035 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "osd last-stat-seq", "id": 2} v 0) -- 0x7f2c200af140 con 0x7f2c200b9d10 2026-03-09T20:20:38.214 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.213+0000 7f2c16ffd640 1 -- 192.168.123.105:0/2893678035 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 2}]=0 v0) ==== 74+0+12 (unknown 92182286 0 1342299878) 0x7f2c2405e0f0 con 0x7f2c200b9d10 2026-03-09T20:20:38.216 INFO:teuthology.orchestra.run.vm05.stdout:73014444047 2026-03-09T20:20:38.221 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.221+0000 7f2c3002c640 1 -- 192.168.123.105:0/2893678035 >> v1:192.168.123.105:6800/3290461294 conn(0x7f2bf0078260 legacy=0x7f2bf007a720 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:38.221 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.221+0000 7f2c3002c640 1 -- 192.168.123.105:0/2893678035 >> v1:192.168.123.105:6789/0 conn(0x7f2c200b9d10 legacy=0x7f2c200b8f20 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:38.221 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.222+0000 7f2c3002c640 1 -- 192.168.123.105:0/2893678035 shutdown_connections 2026-03-09T20:20:38.221 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.222+0000 7f2c3002c640 1 -- 192.168.123.105:0/2893678035 >> 192.168.123.105:0/2893678035 conn(0x7f2c2001a430 msgr2=0x7f2c200a4230 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:38.221 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.222+0000 7f2c3002c640 1 -- 192.168.123.105:0/2893678035 shutdown_connections 2026-03-09T20:20:38.221 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.222+0000 7f2c3002c640 1 -- 192.168.123.105:0/2893678035 wait complete. 
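The tasks.cephadm.ceph_manager.ceph "need seq X got Y for osd.N" lines above are the harness waiting for each OSD's reported stat sequence to catch up with a target recorded earlier; when "got" is still below "need" the query is repeated until the value reaches or passes the target. A simplified polling loop in that spirit, assuming the last_stat_seq() helper from the previous sketch and an arbitrary one-second retry interval (both assumptions, not teuthology's actual implementation):

import time

def wait_for_stat_seq(osd_id, need_seq, timeout=60):
    # Poll until the OSD's last-stat-seq reaches the target, like the repeated
    # osd last-stat-seq queries for osd.5/osd.6/osd.4/osd.7/osd.2 above.
    deadline = time.time() + timeout
    while time.time() < deadline:
        got = last_stat_seq(osd_id)
        if got >= need_seq:
            return got            # e.g. need 154618822665, got 154618822666
        time.sleep(1)
    raise TimeoutError("osd.%d: stat seq stuck below %d" % (osd_id, need_seq))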
2026-03-09T20:20:38.328 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.327+0000 7f89368b9640 1 -- 192.168.123.105:0/3368877688 >> v1:192.168.123.105:6789/0 conn(0x7f893011a770 legacy=0x7f893011cb60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:38.328 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.327+0000 7f89368b9640 1 -- 192.168.123.105:0/3368877688 shutdown_connections 2026-03-09T20:20:38.328 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.327+0000 7f89368b9640 1 -- 192.168.123.105:0/3368877688 >> 192.168.123.105:0/3368877688 conn(0x7f893006d560 msgr2=0x7f893006d970 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:38.328 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.328+0000 7f89368b9640 1 -- 192.168.123.105:0/3368877688 shutdown_connections 2026-03-09T20:20:38.328 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.328+0000 7f89368b9640 1 -- 192.168.123.105:0/3368877688 wait complete. 2026-03-09T20:20:38.328 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.328+0000 7f89368b9640 1 Processor -- start 2026-03-09T20:20:38.328 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.328+0000 7f89368b9640 1 -- start start 2026-03-09T20:20:38.328 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.328+0000 7f89368b9640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f89301b8670 con 0x7f89301b4ad0 2026-03-09T20:20:38.328 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.328+0000 7f89368b9640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f89301b9850 con 0x7f893011e280 2026-03-09T20:20:38.328 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.328+0000 7f89368b9640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f89301baa50 con 0x7f8930074040 2026-03-09T20:20:38.334 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.333+0000 7f89358b7640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7f8930074040 0x7f893010efd0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.105:47766/0 (socket says 192.168.123.105:47766) 2026-03-09T20:20:38.334 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.333+0000 7f89358b7640 1 -- 192.168.123.105:0/3042885632 learned_addr learned my addr 192.168.123.105:0/3042885632 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:20:38.334 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.333+0000 7f891effd640 1 -- 192.168.123.105:0/3042885632 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 367935880 0 0) 0x7f89301b9850 con 0x7f893011e280 2026-03-09T20:20:38.334 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.333+0000 7f891effd640 1 -- 192.168.123.105:0/3042885632 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f890c003620 con 0x7f893011e280 2026-03-09T20:20:38.334 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.333+0000 7f891effd640 1 -- 192.168.123.105:0/3042885632 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2337994460 0 0) 0x7f89301b8670 con 0x7f89301b4ad0 2026-03-09T20:20:38.334 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.333+0000 7f891effd640 1 -- 192.168.123.105:0/3042885632 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 
0x7f89301b9850 con 0x7f89301b4ad0 2026-03-09T20:20:38.334 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.333+0000 7f891effd640 1 -- 192.168.123.105:0/3042885632 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2219189260 0 0) 0x7f89301baa50 con 0x7f8930074040 2026-03-09T20:20:38.334 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.333+0000 7f891effd640 1 -- 192.168.123.105:0/3042885632 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f89301b8670 con 0x7f8930074040 2026-03-09T20:20:38.334 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.334+0000 7f891effd640 1 -- 192.168.123.105:0/3042885632 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3452717715 0 0) 0x7f890c003620 con 0x7f893011e280 2026-03-09T20:20:38.334 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.334+0000 7f891effd640 1 -- 192.168.123.105:0/3042885632 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f89301baa50 con 0x7f893011e280 2026-03-09T20:20:38.334 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.334+0000 7f891effd640 1 -- 192.168.123.105:0/3042885632 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2325927024 0 0) 0x7f89301b8670 con 0x7f8930074040 2026-03-09T20:20:38.334 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.334+0000 7f891effd640 1 -- 192.168.123.105:0/3042885632 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f890c003620 con 0x7f8930074040 2026-03-09T20:20:38.334 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.334+0000 7f891effd640 1 -- 192.168.123.105:0/3042885632 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f8924003200 con 0x7f893011e280 2026-03-09T20:20:38.334 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.334+0000 7f891effd640 1 -- 192.168.123.105:0/3042885632 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2342466200 0 0) 0x7f89301b9850 con 0x7f89301b4ad0 2026-03-09T20:20:38.334 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.334+0000 7f891effd640 1 -- 192.168.123.105:0/3042885632 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f89301b8670 con 0x7f89301b4ad0 2026-03-09T20:20:38.334 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.334+0000 7f891effd640 1 -- 192.168.123.105:0/3042885632 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f8920003070 con 0x7f8930074040 2026-03-09T20:20:38.334 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.334+0000 7f891effd640 1 -- 192.168.123.105:0/3042885632 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f892c003370 con 0x7f89301b4ad0 2026-03-09T20:20:38.335 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.334+0000 7f891effd640 1 -- 192.168.123.105:0/3042885632 <== mon.1 v1:192.168.123.109:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2224440681 0 0) 0x7f89301baa50 con 0x7f893011e280 2026-03-09T20:20:38.335 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.334+0000 7f891effd640 1 -- 192.168.123.105:0/3042885632 >> v1:192.168.123.105:6790/0 conn(0x7f8930074040 legacy=0x7f893010efd0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 
2026-03-09T20:20:38.335 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.334+0000 7f891effd640 1 -- 192.168.123.105:0/3042885632 >> v1:192.168.123.105:6789/0 conn(0x7f89301b4ad0 legacy=0x7f89301b6ec0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:38.335 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.334+0000 7f891effd640 1 -- 192.168.123.105:0/3042885632 --> v1:192.168.123.109:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f89301bbc50 con 0x7f893011e280 2026-03-09T20:20:38.335 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.334+0000 7f89368b9640 1 -- 192.168.123.105:0/3042885632 --> v1:192.168.123.109:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f89301bac80 con 0x7f893011e280 2026-03-09T20:20:38.336 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.334+0000 7f89368b9640 1 -- 192.168.123.105:0/3042885632 --> v1:192.168.123.109:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f89301bb1e0 con 0x7f893011e280 2026-03-09T20:20:38.336 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.335+0000 7f891effd640 1 -- 192.168.123.105:0/3042885632 <== mon.1 v1:192.168.123.109:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f8924003400 con 0x7f893011e280 2026-03-09T20:20:38.336 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.335+0000 7f891effd640 1 -- 192.168.123.105:0/3042885632 <== mon.1 v1:192.168.123.109:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f8924004bd0 con 0x7f893011e280 2026-03-09T20:20:38.336 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.336+0000 7f891effd640 1 -- 192.168.123.105:0/3042885632 <== mon.1 v1:192.168.123.109:6789/0 7 ==== mgrmap(e 16) ==== 100051+0+0 (unknown 646619093 0 0) 0x7f892401d530 con 0x7f893011e280 2026-03-09T20:20:38.336 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.336+0000 7f89368b9640 1 -- 192.168.123.105:0/3042885632 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f88f8005180 con 0x7f893011e280 2026-03-09T20:20:38.337 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.336+0000 7f891effd640 1 -- 192.168.123.105:0/3042885632 <== mon.1 v1:192.168.123.109:6789/0 8 ==== osd_map(58..58 src has 1..58) ==== 5922+0+0 (unknown 1300484666 0 0) 0x7f8924093db0 con 0x7f893011e280 2026-03-09T20:20:38.339 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.339+0000 7f891effd640 1 -- 192.168.123.105:0/3042885632 <== mon.1 v1:192.168.123.109:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f892405cd20 con 0x7f893011e280 2026-03-09T20:20:38.344 INFO:tasks.cephadm.ceph_manager.ceph:need seq 176093659143 got 176093659142 for osd.6 2026-03-09T20:20:38.360 INFO:tasks.cephadm.ceph_manager.ceph:need seq 107374182414 got 107374182414 for osd.3 2026-03-09T20:20:38.360 DEBUG:teuthology.parallel:result is None 2026-03-09T20:20:38.381 INFO:tasks.cephadm.ceph_manager.ceph:need seq 128849018892 got 128849018891 for osd.4 2026-03-09T20:20:38.382 INFO:tasks.cephadm.ceph_manager.ceph:need seq 197568495620 got 197568495619 for osd.7 2026-03-09T20:20:38.393 INFO:tasks.cephadm.ceph_manager.ceph:need seq 73014444048 got 73014444047 for osd.2 2026-03-09T20:20:38.457 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.457+0000 7f89368b9640 1 -- 192.168.123.105:0/3042885632 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "osd last-stat-seq", "id": 1} v 0) -- 
0x7f88f8005470 con 0x7f893011e280 2026-03-09T20:20:38.458 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.458+0000 7f891effd640 1 -- 192.168.123.105:0/3042885632 <== mon.1 v1:192.168.123.109:6789/0 10 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 1}]=0 v0) ==== 74+0+12 (unknown 832126871 0 958075967) 0x7f89240609d0 con 0x7f893011e280 2026-03-09T20:20:38.460 INFO:teuthology.orchestra.run.vm05.stdout:51539607571 2026-03-09T20:20:38.460 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.461+0000 7f89368b9640 1 -- 192.168.123.105:0/3042885632 >> v1:192.168.123.105:6800/3290461294 conn(0x7f890c078420 legacy=0x7f890c07a8e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:38.461 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.461+0000 7f89368b9640 1 -- 192.168.123.105:0/3042885632 >> v1:192.168.123.109:6789/0 conn(0x7f893011e280 legacy=0x7f89301b33b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:38.461 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.461+0000 7f89368b9640 1 -- 192.168.123.105:0/3042885632 shutdown_connections 2026-03-09T20:20:38.461 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.461+0000 7f89368b9640 1 -- 192.168.123.105:0/3042885632 >> 192.168.123.105:0/3042885632 conn(0x7f893006d560 msgr2=0x7f89300735d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:38.461 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.461+0000 7f89368b9640 1 -- 192.168.123.105:0/3042885632 shutdown_connections 2026-03-09T20:20:38.461 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:38.462+0000 7f89368b9640 1 -- 192.168.123.105:0/3042885632 wait complete. 2026-03-09T20:20:38.610 INFO:tasks.cephadm.ceph_manager.ceph:need seq 51539607571 got 51539607571 for osd.1 2026-03-09T20:20:38.610 DEBUG:teuthology.parallel:result is None 2026-03-09T20:20:38.725 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:38 vm05 ceph-mon[61345]: pgmap v118: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 72 KiB/s rd, 5.5 KiB/s wr, 175 op/s 2026-03-09T20:20:38.725 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:38 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/819762631' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-09T20:20:38.725 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:38 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1610838409' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-09T20:20:38.725 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:38 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1660925591' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-09T20:20:38.725 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:38 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/965380237' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-09T20:20:38.725 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:38 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1290199391' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-09T20:20:38.725 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:38 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/2893678035' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-09T20:20:38.725 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:38 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3042885632' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T20:20:38.726 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:38 vm05 ceph-mon[51870]: pgmap v118: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 72 KiB/s rd, 5.5 KiB/s wr, 175 op/s 2026-03-09T20:20:38.726 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:38 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/819762631' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-09T20:20:38.726 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:38 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1610838409' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-09T20:20:38.726 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:38 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1660925591' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-09T20:20:38.726 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:38 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/965380237' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-09T20:20:38.726 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:38 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1290199391' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-09T20:20:38.726 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:38 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2893678035' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-09T20:20:38.726 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:38 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3042885632' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T20:20:38.783 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:38 vm09 ceph-mon[54524]: pgmap v118: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 72 KiB/s rd, 5.5 KiB/s wr, 175 op/s 2026-03-09T20:20:38.783 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:38 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/819762631' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-09T20:20:38.783 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:38 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1610838409' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-09T20:20:38.783 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:38 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1660925591' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-09T20:20:38.783 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:38 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/965380237' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-09T20:20:38.783 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:38 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/1290199391' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-09T20:20:38.783 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:38 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2893678035' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-09T20:20:38.784 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:38 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3042885632' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T20:20:39.151 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph osd last-stat-seq osd.5 2026-03-09T20:20:39.323 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:20:39.344 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph osd last-stat-seq osd.6 2026-03-09T20:20:39.382 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph osd last-stat-seq osd.4 2026-03-09T20:20:39.383 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph osd last-stat-seq osd.7 2026-03-09T20:20:39.394 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph osd last-stat-seq osd.2 2026-03-09T20:20:39.484 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.480+0000 7f318150f640 1 -- 192.168.123.105:0/2606885114 >> v1:192.168.123.105:6790/0 conn(0x7f317c074040 legacy=0x7f317c074420 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:39.484 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.481+0000 7f318150f640 1 -- 192.168.123.105:0/2606885114 shutdown_connections 2026-03-09T20:20:39.484 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.481+0000 7f318150f640 1 -- 192.168.123.105:0/2606885114 >> 192.168.123.105:0/2606885114 conn(0x7f317c06d560 msgr2=0x7f317c06d970 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:39.484 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.481+0000 7f318150f640 1 -- 192.168.123.105:0/2606885114 shutdown_connections 2026-03-09T20:20:39.484 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.481+0000 7f318150f640 1 -- 192.168.123.105:0/2606885114 wait complete. 
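The journalctl records above show how each query reaches the cluster: every ceph-mon logs the request as from='client.?' ... cmd=[{"prefix": "osd last-stat-seq", "id": N}]: dispatch, i.e. the CLI turns the command into a JSON mon_command with a prefix and an id argument. The same request can be issued directly with the standard rados Python bindings; the conffile path below is an assumption for the example (inside a cephadm shell the inferred config seen above would be used instead):

import json
import rados

# Assumed config path; adjust for the environment at hand.
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    # Same JSON shape the mons log above: {"prefix": "osd last-stat-seq", "id": 5}
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({"prefix": "osd last-stat-seq", "id": 5}), b"")
    if ret == 0:
        print(int(outbuf.decode().strip()))   # sequence number for osd.5
finally:
    cluster.shutdown()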
2026-03-09T20:20:39.484 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.482+0000 7f318150f640 1 Processor -- start 2026-03-09T20:20:39.484 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.482+0000 7f318150f640 1 -- start start 2026-03-09T20:20:39.484 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.482+0000 7f318150f640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f317c1b8420 con 0x7f317c11a770 2026-03-09T20:20:39.484 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.482+0000 7f318150f640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f317c1b9620 con 0x7f317c11e280 2026-03-09T20:20:39.484 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.482+0000 7f318150f640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f317c1ba820 con 0x7f317c074040 2026-03-09T20:20:39.484 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.482+0000 7f317b7fe640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f317c11a770 0x7f317c1b33b0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:50908/0 (socket says 192.168.123.105:50908) 2026-03-09T20:20:39.484 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.482+0000 7f317b7fe640 1 -- 192.168.123.105:0/2529193243 learned_addr learned my addr 192.168.123.105:0/2529193243 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:20:39.484 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.482+0000 7f31797fa640 1 -- 192.168.123.105:0/2529193243 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 221864837 0 0) 0x7f317c1b8420 con 0x7f317c11a770 2026-03-09T20:20:39.484 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.482+0000 7f31797fa640 1 -- 192.168.123.105:0/2529193243 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f3154003620 con 0x7f317c11a770 2026-03-09T20:20:39.484 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.483+0000 7f31797fa640 1 -- 192.168.123.105:0/2529193243 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3406272305 0 0) 0x7f317c1ba820 con 0x7f317c074040 2026-03-09T20:20:39.484 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.483+0000 7f31797fa640 1 -- 192.168.123.105:0/2529193243 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f317c1b8420 con 0x7f317c074040 2026-03-09T20:20:39.484 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.483+0000 7f31797fa640 1 -- 192.168.123.105:0/2529193243 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1401175448 0 0) 0x7f317c1b9620 con 0x7f317c11e280 2026-03-09T20:20:39.484 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.483+0000 7f31797fa640 1 -- 192.168.123.105:0/2529193243 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f317c1ba820 con 0x7f317c11e280 2026-03-09T20:20:39.484 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.483+0000 7f31797fa640 1 -- 192.168.123.105:0/2529193243 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 175428156 0 0) 0x7f3154003620 con 0x7f317c11a770 2026-03-09T20:20:39.484 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.483+0000 7f31797fa640 1 -- 192.168.123.105:0/2529193243 --> 
v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f317c1b9620 con 0x7f317c11a770 2026-03-09T20:20:39.484 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.483+0000 7f31797fa640 1 -- 192.168.123.105:0/2529193243 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f31700031a0 con 0x7f317c11a770 2026-03-09T20:20:39.485 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.483+0000 7f31797fa640 1 -- 192.168.123.105:0/2529193243 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 453619157 0 0) 0x7f317c1b8420 con 0x7f317c074040 2026-03-09T20:20:39.485 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.483+0000 7f31797fa640 1 -- 192.168.123.105:0/2529193243 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f3154003620 con 0x7f317c074040 2026-03-09T20:20:39.485 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.483+0000 7f31797fa640 1 -- 192.168.123.105:0/2529193243 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f3164002860 con 0x7f317c074040 2026-03-09T20:20:39.485 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.483+0000 7f31797fa640 1 -- 192.168.123.105:0/2529193243 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 170996219 0 0) 0x7f317c1b9620 con 0x7f317c11a770 2026-03-09T20:20:39.485 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.483+0000 7f31797fa640 1 -- 192.168.123.105:0/2529193243 >> v1:192.168.123.105:6790/0 conn(0x7f317c074040 legacy=0x7f317c111200 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:39.485 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.483+0000 7f31797fa640 1 -- 192.168.123.105:0/2529193243 >> v1:192.168.123.109:6789/0 conn(0x7f317c11e280 legacy=0x7f317c1b6b20 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:39.485 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.483+0000 7f31797fa640 1 -- 192.168.123.105:0/2529193243 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f317c1bba20 con 0x7f317c11a770 2026-03-09T20:20:39.488 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.483+0000 7f318150f640 1 -- 192.168.123.105:0/2529193243 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f317c1baa50 con 0x7f317c11a770 2026-03-09T20:20:39.488 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.484+0000 7f318150f640 1 -- 192.168.123.105:0/2529193243 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f317c1bafb0 con 0x7f317c11a770 2026-03-09T20:20:39.488 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.484+0000 7f31797fa640 1 -- 192.168.123.105:0/2529193243 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f3170003b60 con 0x7f317c11a770 2026-03-09T20:20:39.488 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.484+0000 7f31797fa640 1 -- 192.168.123.105:0/2529193243 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f3170005cf0 con 0x7f317c11a770 2026-03-09T20:20:39.488 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.485+0000 7f31797fa640 1 -- 192.168.123.105:0/2529193243 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 16) ==== 100051+0+0 (unknown 646619093 0 0) 0x7f317001e630 con 0x7f317c11a770 
2026-03-09T20:20:39.488 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.485+0000 7f318150f640 1 -- 192.168.123.105:0/2529193243 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f313c005180 con 0x7f317c11a770 2026-03-09T20:20:39.488 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.488+0000 7f31797fa640 1 -- 192.168.123.105:0/2529193243 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(58..58 src has 1..58) ==== 5922+0+0 (unknown 1300484666 0 0) 0x7f3170093cb0 con 0x7f317c11a770 2026-03-09T20:20:39.488 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.488+0000 7f31797fa640 1 -- 192.168.123.105:0/2529193243 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f31700974e0 con 0x7f317c11a770 2026-03-09T20:20:39.635 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.635+0000 7f318150f640 1 -- 192.168.123.105:0/2529193243 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "osd last-stat-seq", "id": 5} v 0) -- 0x7f313c005470 con 0x7f317c11a770 2026-03-09T20:20:39.637 INFO:teuthology.orchestra.run.vm05.stdout:154618822666 2026-03-09T20:20:39.637 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.635+0000 7f31797fa640 1 -- 192.168.123.105:0/2529193243 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 5}]=0 v0) ==== 74+0+13 (unknown 2131975755 0 3023589679) 0x7f317005e380 con 0x7f317c11a770 2026-03-09T20:20:39.638 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.638+0000 7f315affd640 1 -- 192.168.123.105:0/2529193243 >> v1:192.168.123.105:6800/3290461294 conn(0x7f3154078490 legacy=0x7f315407a930 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:39.638 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.638+0000 7f315affd640 1 -- 192.168.123.105:0/2529193243 >> v1:192.168.123.105:6789/0 conn(0x7f317c11a770 legacy=0x7f317c1b33b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:39.638 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.639+0000 7f315affd640 1 -- 192.168.123.105:0/2529193243 shutdown_connections 2026-03-09T20:20:39.638 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.639+0000 7f315affd640 1 -- 192.168.123.105:0/2529193243 >> 192.168.123.105:0/2529193243 conn(0x7f317c06d560 msgr2=0x7f317c0726a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:39.638 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.639+0000 7f315affd640 1 -- 192.168.123.105:0/2529193243 shutdown_connections 2026-03-09T20:20:39.638 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.639+0000 7f315affd640 1 -- 192.168.123.105:0/2529193243 wait complete. 2026-03-09T20:20:39.639 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:20:39 vm09 systemd[1]: Starting Ceph prometheus.a for c0151936-1bf4-11f1-b896-23f7bea8a6ea... 
2026-03-09T20:20:39.652 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:20:39.866 INFO:tasks.cephadm.ceph_manager.ceph:need seq 154618822665 got 154618822666 for osd.5 2026-03-09T20:20:39.866 DEBUG:teuthology.parallel:result is None 2026-03-09T20:20:39.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.904+0000 7fc6a6429640 1 -- 192.168.123.105:0/802879110 >> v1:192.168.123.105:6790/0 conn(0x7fc6a0074230 legacy=0x7fc6a0074610 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:39.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.904+0000 7fc6a6429640 1 -- 192.168.123.105:0/802879110 shutdown_connections 2026-03-09T20:20:39.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.904+0000 7fc6a6429640 1 -- 192.168.123.105:0/802879110 >> 192.168.123.105:0/802879110 conn(0x7fc6a006e900 msgr2=0x7fc6a006ed10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:39.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.904+0000 7fc6a6429640 1 -- 192.168.123.105:0/802879110 shutdown_connections 2026-03-09T20:20:39.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.904+0000 7fc6a6429640 1 -- 192.168.123.105:0/802879110 wait complete. 2026-03-09T20:20:39.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.906+0000 7fc6a6429640 1 Processor -- start 2026-03-09T20:20:39.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.906+0000 7fc6a6429640 1 -- start start 2026-03-09T20:20:39.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.906+0000 7fc6a6429640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fc6a013a430 con 0x7fc6a0086240 2026-03-09T20:20:39.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.906+0000 7fc6a6429640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fc6a013b610 con 0x7fc6a00772b0 2026-03-09T20:20:39.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.906+0000 7fc6a6429640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fc6a013c7f0 con 0x7fc6a007ae70 2026-03-09T20:20:39.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.906+0000 7fc69f7fe640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7fc6a007ae70 0x7fc6a00834b0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.105:57438/0 (socket says 192.168.123.105:57438) 2026-03-09T20:20:39.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.906+0000 7fc69f7fe640 1 -- 192.168.123.105:0/2279724817 learned_addr learned my addr 192.168.123.105:0/2279724817 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:20:39.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.907+0000 7fc69d7fa640 1 -- 192.168.123.105:0/2279724817 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1954889384 0 0) 0x7fc6a013c7f0 con 0x7fc6a007ae70 2026-03-09T20:20:39.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.907+0000 7fc69d7fa640 1 -- 192.168.123.105:0/2279724817 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fc670003620 con 0x7fc6a007ae70 2026-03-09T20:20:39.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.907+0000 7fc69d7fa640 1 -- 192.168.123.105:0/2279724817 <== mon.0 v1:192.168.123.105:6789/0 1 
==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2874106059 0 0) 0x7fc6a013a430 con 0x7fc6a0086240 2026-03-09T20:20:39.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.907+0000 7fc69d7fa640 1 -- 192.168.123.105:0/2279724817 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fc6a013c7f0 con 0x7fc6a0086240 2026-03-09T20:20:39.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.907+0000 7fc69d7fa640 1 -- 192.168.123.105:0/2279724817 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2359830930 0 0) 0x7fc6a013b610 con 0x7fc6a00772b0 2026-03-09T20:20:39.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.907+0000 7fc69d7fa640 1 -- 192.168.123.105:0/2279724817 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fc6a013a430 con 0x7fc6a00772b0 2026-03-09T20:20:39.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.908+0000 7fc69d7fa640 1 -- 192.168.123.105:0/2279724817 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2444940846 0 0) 0x7fc670003620 con 0x7fc6a007ae70 2026-03-09T20:20:39.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.908+0000 7fc69d7fa640 1 -- 192.168.123.105:0/2279724817 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fc6a013b610 con 0x7fc6a007ae70 2026-03-09T20:20:39.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.908+0000 7fc69d7fa640 1 -- 192.168.123.105:0/2279724817 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3301862716 0 0) 0x7fc6a013c7f0 con 0x7fc6a0086240 2026-03-09T20:20:39.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.908+0000 7fc69d7fa640 1 -- 192.168.123.105:0/2279724817 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fc670003620 con 0x7fc6a0086240 2026-03-09T20:20:39.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.908+0000 7fc69d7fa640 1 -- 192.168.123.105:0/2279724817 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3090329786 0 0) 0x7fc6a013a430 con 0x7fc6a00772b0 2026-03-09T20:20:39.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.908+0000 7fc69d7fa640 1 -- 192.168.123.105:0/2279724817 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fc6a013c7f0 con 0x7fc6a00772b0 2026-03-09T20:20:39.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.908+0000 7fc69d7fa640 1 -- 192.168.123.105:0/2279724817 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7fc698003450 con 0x7fc6a007ae70 2026-03-09T20:20:39.908 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.908+0000 7fc69d7fa640 1 -- 192.168.123.105:0/2279724817 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7fc694002f10 con 0x7fc6a0086240 2026-03-09T20:20:39.908 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.908+0000 7fc69d7fa640 1 -- 192.168.123.105:0/2279724817 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7fc690003180 con 0x7fc6a00772b0 2026-03-09T20:20:39.908 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.908+0000 7fc69d7fa640 1 -- 192.168.123.105:0/2279724817 <== mon.2 v1:192.168.123.105:6790/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 
4238896400 0 0) 0x7fc6a013b610 con 0x7fc6a007ae70 2026-03-09T20:20:39.908 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.909+0000 7fc69d7fa640 1 -- 192.168.123.105:0/2279724817 >> v1:192.168.123.109:6789/0 conn(0x7fc6a00772b0 legacy=0x7fc6a0085920 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:39.908 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.909+0000 7fc69d7fa640 1 -- 192.168.123.105:0/2279724817 >> v1:192.168.123.105:6789/0 conn(0x7fc6a0086240 legacy=0x7fc6a0083c00 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:39.908 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.909+0000 7fc69d7fa640 1 -- 192.168.123.105:0/2279724817 --> v1:192.168.123.105:6790/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fc6a013d9d0 con 0x7fc6a007ae70 2026-03-09T20:20:39.910 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.909+0000 7fc6a6429640 1 -- 192.168.123.105:0/2279724817 --> v1:192.168.123.105:6790/0 -- mon_subscribe({mgrmap=0+}) -- 0x7fc6a013b840 con 0x7fc6a007ae70 2026-03-09T20:20:39.910 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.909+0000 7fc6a6429640 1 -- 192.168.123.105:0/2279724817 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=0}) -- 0x7fc6a013bd00 con 0x7fc6a007ae70 2026-03-09T20:20:39.910 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.911+0000 7fc69d7fa640 1 -- 192.168.123.105:0/2279724817 <== mon.2 v1:192.168.123.105:6790/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7fc698003ee0 con 0x7fc6a007ae70 2026-03-09T20:20:39.910 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.911+0000 7fc69d7fa640 1 -- 192.168.123.105:0/2279724817 <== mon.2 v1:192.168.123.105:6790/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7fc698005f40 con 0x7fc6a007ae70 2026-03-09T20:20:39.911 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.911+0000 7fc69d7fa640 1 -- 192.168.123.105:0/2279724817 <== mon.2 v1:192.168.123.105:6790/0 7 ==== mgrmap(e 16) ==== 100051+0+0 (unknown 646619093 0 0) 0x7fc6980071f0 con 0x7fc6a007ae70 2026-03-09T20:20:39.911 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.912+0000 7fc69d7fa640 1 -- 192.168.123.105:0/2279724817 <== mon.2 v1:192.168.123.105:6790/0 8 ==== osd_map(58..58 src has 1..58) ==== 5922+0+0 (unknown 1300484666 0 0) 0x7fc6980962b0 con 0x7fc6a007ae70 2026-03-09T20:20:39.914 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.912+0000 7fc6a6429640 1 -- 192.168.123.105:0/2279724817 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fc668005180 con 0x7fc6a007ae70 2026-03-09T20:20:39.914 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:39.915+0000 7fc69d7fa640 1 -- 192.168.123.105:0/2279724817 <== mon.2 v1:192.168.123.105:6790/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7fc69805f220 con 0x7fc6a007ae70 2026-03-09T20:20:40.008 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:20:40.017 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.016+0000 7fc6a6429640 1 -- 192.168.123.105:0/2279724817 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "osd last-stat-seq", "id": 6} v 0) -- 0x7fc66c0051a0 con 0x7fc6a007ae70 2026-03-09T20:20:40.018 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.018+0000 7fc69d7fa640 1 
-- 192.168.123.105:0/2279724817 <== mon.2 v1:192.168.123.105:6790/0 10 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 6}]=0 v0) ==== 74+0+13 (unknown 1274345170 0 755571159) 0x7fc698062ed0 con 0x7fc6a007ae70 2026-03-09T20:20:40.018 INFO:teuthology.orchestra.run.vm05.stdout:176093659144 2026-03-09T20:20:40.020 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.021+0000 7fc6a6429640 1 -- 192.168.123.105:0/2279724817 >> v1:192.168.123.105:6800/3290461294 conn(0x7fc670078a50 legacy=0x7fc67007af10 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:40.021 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.021+0000 7fc6a6429640 1 -- 192.168.123.105:0/2279724817 >> v1:192.168.123.105:6790/0 conn(0x7fc6a007ae70 legacy=0x7fc6a00834b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:40.021 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.021+0000 7fc6a6429640 1 -- 192.168.123.105:0/2279724817 shutdown_connections 2026-03-09T20:20:40.021 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.021+0000 7fc6a6429640 1 -- 192.168.123.105:0/2279724817 >> 192.168.123.105:0/2279724817 conn(0x7fc6a006e900 msgr2=0x7fc6a010f2f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:40.021 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.021+0000 7fc6a6429640 1 -- 192.168.123.105:0/2279724817 shutdown_connections 2026-03-09T20:20:40.021 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.021+0000 7fc6a6429640 1 -- 192.168.123.105:0/2279724817 wait complete. 2026-03-09T20:20:40.024 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:20:39 vm09 podman[80022]: 2026-03-09 20:20:39.639830616 +0000 UTC m=+0.019275874 container create 962c244b4fc17d64a1784ff8e1a02685520c91bd06614189f5db8790f22b8716 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a, maintainer=The Prometheus Authors ) 2026-03-09T20:20:40.024 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:20:39 vm09 podman[80022]: 2026-03-09 20:20:39.690565957 +0000 UTC m=+0.070011215 container init 962c244b4fc17d64a1784ff8e1a02685520c91bd06614189f5db8790f22b8716 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a, maintainer=The Prometheus Authors ) 2026-03-09T20:20:40.024 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:20:39 vm09 podman[80022]: 2026-03-09 20:20:39.695000118 +0000 UTC m=+0.074445376 container start 962c244b4fc17d64a1784ff8e1a02685520c91bd06614189f5db8790f22b8716 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a, maintainer=The Prometheus Authors ) 2026-03-09T20:20:40.024 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:20:39 vm09 bash[80022]: 962c244b4fc17d64a1784ff8e1a02685520c91bd06614189f5db8790f22b8716 2026-03-09T20:20:40.024 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:20:39 vm09 podman[80022]: 2026-03-09 20:20:39.631337207 +0000 UTC m=+0.010782465 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0 2026-03-09T20:20:40.024 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:20:39 vm09 systemd[1]: Started Ceph prometheus.a for c0151936-1bf4-11f1-b896-23f7bea8a6ea. 
2026-03-09T20:20:40.024 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:20:39 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[80032]: ts=2026-03-09T20:20:39.739Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)" 2026-03-09T20:20:40.024 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:20:39 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[80032]: ts=2026-03-09T20:20:39.739Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)" 2026-03-09T20:20:40.024 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:20:39 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[80032]: ts=2026-03-09T20:20:39.739Z caller=main.go:623 level=info host_details="(Linux 5.14.0-686.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Feb 19 10:49:27 UTC 2026 x86_64 vm09 (none))" 2026-03-09T20:20:40.024 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:20:39 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[80032]: ts=2026-03-09T20:20:39.739Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)" 2026-03-09T20:20:40.024 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:20:39 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[80032]: ts=2026-03-09T20:20:39.740Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)" 2026-03-09T20:20:40.024 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:20:39 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[80032]: ts=2026-03-09T20:20:39.741Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=:9095 2026-03-09T20:20:40.024 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:20:39 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[80032]: ts=2026-03-09T20:20:39.741Z caller=main.go:1129 level=info msg="Starting TSDB ..." 2026-03-09T20:20:40.024 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:20:39 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[80032]: ts=2026-03-09T20:20:39.742Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9095 2026-03-09T20:20:40.024 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:20:39 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[80032]: ts=2026-03-09T20:20:39.742Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." 
http2=false address=[::]:9095 2026-03-09T20:20:40.024 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:20:39 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[80032]: ts=2026-03-09T20:20:39.746Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 2026-03-09T20:20:40.024 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:20:39 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[80032]: ts=2026-03-09T20:20:39.746Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=2.044µs 2026-03-09T20:20:40.024 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:20:39 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[80032]: ts=2026-03-09T20:20:39.746Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while" 2026-03-09T20:20:40.024 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:20:39 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[80032]: ts=2026-03-09T20:20:39.747Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 2026-03-09T20:20:40.024 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:20:39 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[80032]: ts=2026-03-09T20:20:39.747Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=20.789µs wal_replay_duration=1.376087ms wbl_replay_duration=140ns total_replay_duration=1.433694ms 2026-03-09T20:20:40.024 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:20:39 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[80032]: ts=2026-03-09T20:20:39.748Z caller=main.go:1150 level=info fs_type=XFS_SUPER_MAGIC 2026-03-09T20:20:40.024 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:20:39 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[80032]: ts=2026-03-09T20:20:39.748Z caller=main.go:1153 level=info msg="TSDB started" 2026-03-09T20:20:40.024 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:20:39 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[80032]: ts=2026-03-09T20:20:39.748Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 2026-03-09T20:20:40.024 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:20:39 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[80032]: ts=2026-03-09T20:20:39.762Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=14.011027ms db_storage=922ns remote_storage=1.313µs web_handler=682ns query_engine=491ns scrape=628.126µs scrape_sd=79.609µs notify=952ns notify_sd=601ns rules=12.809979ms tracing=6.913µs 2026-03-09T20:20:40.024 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:20:39 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[80032]: ts=2026-03-09T20:20:39.762Z caller=main.go:1114 level=info msg="Server is ready to receive web requests." 2026-03-09T20:20:40.024 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:20:39 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[80032]: ts=2026-03-09T20:20:39.762Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..." 
2026-03-09T20:20:40.180 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:20:40.181 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.176+0000 7f6a5ffff640 1 -- 192.168.123.105:0/391779146 <== mon.2 v1:192.168.123.105:6790/0 5 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f6a64004800 con 0x7f6a6810de30 2026-03-09T20:20:40.181 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.181+0000 7f6a6f97f640 1 -- 192.168.123.105:0/391779146 >> v1:192.168.123.105:6790/0 conn(0x7f6a6810de30 legacy=0x7f6a6810e210 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:40.181 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.181+0000 7f6a6f97f640 1 -- 192.168.123.105:0/391779146 shutdown_connections 2026-03-09T20:20:40.181 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.181+0000 7f6a6f97f640 1 -- 192.168.123.105:0/391779146 >> 192.168.123.105:0/391779146 conn(0x7f6a6806d730 msgr2=0x7f6a6806db40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:40.181 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.181+0000 7f6a6f97f640 1 -- 192.168.123.105:0/391779146 shutdown_connections 2026-03-09T20:20:40.181 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.182+0000 7f6a6f97f640 1 -- 192.168.123.105:0/391779146 wait complete. 2026-03-09T20:20:40.181 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.182+0000 7f6a6f97f640 1 Processor -- start 2026-03-09T20:20:40.184 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:20:40.187 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.184+0000 7f6a6f97f640 1 -- start start 2026-03-09T20:20:40.188 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.185+0000 7f6a6f97f640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f6a68074b10 con 0x7f6a6810de30 2026-03-09T20:20:40.188 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.185+0000 7f6a6f97f640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f6a68074ce0 con 0x7f6a6807aee0 2026-03-09T20:20:40.188 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.185+0000 7f6a6f97f640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f6a6813cc70 con 0x7f6a68077210 2026-03-09T20:20:40.188 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.185+0000 7f6a6d6f4640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7f6a68077210 0x7f6a68071aa0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.105:57452/0 (socket says 192.168.123.105:57452) 2026-03-09T20:20:40.188 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.185+0000 7f6a6d6f4640 1 -- 192.168.123.105:0/567258792 learned_addr learned my addr 192.168.123.105:0/567258792 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:20:40.188 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.185+0000 7f6a5e7fc640 1 -- 192.168.123.105:0/567258792 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2976690393 0 0) 0x7f6a6813cc70 con 0x7f6a68077210 2026-03-09T20:20:40.188 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.185+0000 7f6a5e7fc640 1 -- 192.168.123.105:0/567258792 --> v1:192.168.123.105:6790/0 -- auth(proto 
2 36 bytes epoch 0) -- 0x7f6a3c003620 con 0x7f6a68077210 2026-03-09T20:20:40.188 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.185+0000 7f6a5e7fc640 1 -- 192.168.123.105:0/567258792 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1916216778 0 0) 0x7f6a68074ce0 con 0x7f6a6807aee0 2026-03-09T20:20:40.188 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.185+0000 7f6a5e7fc640 1 -- 192.168.123.105:0/567258792 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f6a6813cc70 con 0x7f6a6807aee0 2026-03-09T20:20:40.188 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.185+0000 7f6a5e7fc640 1 -- 192.168.123.105:0/567258792 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 4008151293 0 0) 0x7f6a3c003620 con 0x7f6a68077210 2026-03-09T20:20:40.188 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.185+0000 7f6a5e7fc640 1 -- 192.168.123.105:0/567258792 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f6a68074ce0 con 0x7f6a68077210 2026-03-09T20:20:40.188 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.185+0000 7f6a5e7fc640 1 -- 192.168.123.105:0/567258792 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f6a64003280 con 0x7f6a68077210 2026-03-09T20:20:40.188 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.186+0000 7f6a5e7fc640 1 -- 192.168.123.105:0/567258792 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 311246946 0 0) 0x7f6a68074b10 con 0x7f6a6810de30 2026-03-09T20:20:40.188 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.186+0000 7f6a5e7fc640 1 -- 192.168.123.105:0/567258792 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f6a3c003620 con 0x7f6a6810de30 2026-03-09T20:20:40.188 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.186+0000 7f6a5e7fc640 1 -- 192.168.123.105:0/567258792 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3971467463 0 0) 0x7f6a6813cc70 con 0x7f6a6807aee0 2026-03-09T20:20:40.188 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.186+0000 7f6a5e7fc640 1 -- 192.168.123.105:0/567258792 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f6a68074b10 con 0x7f6a6807aee0 2026-03-09T20:20:40.188 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.186+0000 7f6a5e7fc640 1 -- 192.168.123.105:0/567258792 <== mon.2 v1:192.168.123.105:6790/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 3700939354 0 0) 0x7f6a68074ce0 con 0x7f6a68077210 2026-03-09T20:20:40.188 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.186+0000 7f6a5e7fc640 1 -- 192.168.123.105:0/567258792 >> v1:192.168.123.109:6789/0 conn(0x7f6a6807aee0 legacy=0x7f6a680721b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:40.188 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.186+0000 7f6a5e7fc640 1 -- 192.168.123.105:0/567258792 >> v1:192.168.123.105:6789/0 conn(0x7f6a6810de30 legacy=0x7f6a68139530 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:40.188 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.186+0000 7f6a5e7fc640 1 -- 192.168.123.105:0/567258792 --> v1:192.168.123.105:6790/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f6a6813de50 con 0x7f6a68077210 
2026-03-09T20:20:40.188 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.186+0000 7f6a6f97f640 1 -- 192.168.123.105:0/567258792 --> v1:192.168.123.105:6790/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f6a6813ce40 con 0x7f6a68077210 2026-03-09T20:20:40.188 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.186+0000 7f6a5e7fc640 1 -- 192.168.123.105:0/567258792 <== mon.2 v1:192.168.123.105:6790/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f6a640046c0 con 0x7f6a68077210 2026-03-09T20:20:40.188 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.186+0000 7f6a5e7fc640 1 -- 192.168.123.105:0/567258792 <== mon.2 v1:192.168.123.105:6790/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f6a64004fa0 con 0x7f6a68077210 2026-03-09T20:20:40.188 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.186+0000 7f6a6f97f640 1 -- 192.168.123.105:0/567258792 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=0}) -- 0x7f6a6813d3a0 con 0x7f6a68077210 2026-03-09T20:20:40.188 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.188+0000 7f6a5e7fc640 1 -- 192.168.123.105:0/567258792 <== mon.2 v1:192.168.123.105:6790/0 7 ==== mgrmap(e 16) ==== 100051+0+0 (unknown 646619093 0 0) 0x7f6a64005140 con 0x7f6a68077210 2026-03-09T20:20:40.195 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.188+0000 7f6a5e7fc640 1 -- 192.168.123.105:0/567258792 <== mon.2 v1:192.168.123.105:6790/0 8 ==== osd_map(58..58 src has 1..58) ==== 5922+0+0 (unknown 1300484666 0 0) 0x7f6a6405ac30 con 0x7f6a68077210 2026-03-09T20:20:40.196 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.188+0000 7f6a6f97f640 1 -- 192.168.123.105:0/567258792 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f6a6810a680 con 0x7f6a68077210 2026-03-09T20:20:40.196 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.195+0000 7f6a5e7fc640 1 -- 192.168.123.105:0/567258792 <== mon.2 v1:192.168.123.105:6790/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f6a6405f020 con 0x7f6a68077210 2026-03-09T20:20:40.229 INFO:tasks.cephadm.ceph_manager.ceph:need seq 176093659143 got 176093659144 for osd.6 2026-03-09T20:20:40.229 DEBUG:teuthology.parallel:result is None 2026-03-09T20:20:40.368 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.368+0000 7f6a6f97f640 1 -- 192.168.123.105:0/567258792 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "osd last-stat-seq", "id": 7} v 0) -- 0x7f6a6807f9c0 con 0x7f6a68077210 2026-03-09T20:20:40.368 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.368+0000 7f6a5e7fc640 1 -- 192.168.123.105:0/567258792 <== mon.2 v1:192.168.123.105:6790/0 10 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 7}]=0 v0) ==== 74+0+13 (unknown 1482059429 0 791086196) 0x7f6a64062cd0 con 0x7f6a68077210 2026-03-09T20:20:40.373 INFO:teuthology.orchestra.run.vm05.stdout:197568495621 2026-03-09T20:20:40.377 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.378+0000 7f6a6f97f640 1 -- 192.168.123.105:0/567258792 >> v1:192.168.123.105:6800/3290461294 conn(0x7f6a3c078520 legacy=0x7f6a3c07a9e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:40.377 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.378+0000 7f6a6f97f640 1 -- 192.168.123.105:0/567258792 >> v1:192.168.123.105:6790/0 conn(0x7f6a68077210 legacy=0x7f6a68071aa0 unknown :-1 
s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:40.379 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.380+0000 7f6a6f97f640 1 -- 192.168.123.105:0/567258792 shutdown_connections 2026-03-09T20:20:40.379 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.380+0000 7f6a6f97f640 1 -- 192.168.123.105:0/567258792 >> 192.168.123.105:0/567258792 conn(0x7f6a6806d730 msgr2=0x7f6a6807d3a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:40.379 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.380+0000 7f6a6f97f640 1 -- 192.168.123.105:0/567258792 shutdown_connections 2026-03-09T20:20:40.379 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.380+0000 7f6a6f97f640 1 -- 192.168.123.105:0/567258792 wait complete. 2026-03-09T20:20:40.412 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.412+0000 7f03d2d15640 1 -- 192.168.123.105:0/138879778 >> v1:192.168.123.105:6789/0 conn(0x7f03cc07aee0 legacy=0x7f03cc07d3a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:40.414 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.414+0000 7f03d2d15640 1 -- 192.168.123.105:0/138879778 shutdown_connections 2026-03-09T20:20:40.414 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.414+0000 7f03d2d15640 1 -- 192.168.123.105:0/138879778 >> 192.168.123.105:0/138879778 conn(0x7f03cc06d730 msgr2=0x7f03cc06db40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:40.414 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.414+0000 7f03d2d15640 1 -- 192.168.123.105:0/138879778 shutdown_connections 2026-03-09T20:20:40.414 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.414+0000 7f03d2d15640 1 -- 192.168.123.105:0/138879778 wait complete. 
2026-03-09T20:20:40.414 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.415+0000 7f03d2d15640 1 Processor -- start 2026-03-09T20:20:40.415 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.415+0000 7f03d2d15640 1 -- start start 2026-03-09T20:20:40.416 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.415+0000 7f03d2d15640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f03cc13a540 con 0x7f03cc10de30 2026-03-09T20:20:40.416 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.415+0000 7f03d2d15640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f03cc13b720 con 0x7f03cc077210 2026-03-09T20:20:40.416 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.415+0000 7f03d2d15640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f03cc13c900 con 0x7f03cc07aee0 2026-03-09T20:20:40.416 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.415+0000 7f03cbfff640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7f03cc07aee0 0x7f03cc085500 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.105:57470/0 (socket says 192.168.123.105:57470) 2026-03-09T20:20:40.417 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.415+0000 7f03cbfff640 1 -- 192.168.123.105:0/1388260921 learned_addr learned my addr 192.168.123.105:0/1388260921 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:20:40.417 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.415+0000 7f03c9ffb640 1 -- 192.168.123.105:0/1388260921 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 361496750 0 0) 0x7f03cc13a540 con 0x7f03cc10de30 2026-03-09T20:20:40.417 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.415+0000 7f03c9ffb640 1 -- 192.168.123.105:0/1388260921 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f03a8003620 con 0x7f03cc10de30 2026-03-09T20:20:40.417 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.415+0000 7f03c9ffb640 1 -- 192.168.123.105:0/1388260921 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 323032383 0 0) 0x7f03cc13c900 con 0x7f03cc07aee0 2026-03-09T20:20:40.417 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.415+0000 7f03c9ffb640 1 -- 192.168.123.105:0/1388260921 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f03cc13a540 con 0x7f03cc07aee0 2026-03-09T20:20:40.417 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.416+0000 7f03c9ffb640 1 -- 192.168.123.105:0/1388260921 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 4015516206 0 0) 0x7f03a8003620 con 0x7f03cc10de30 2026-03-09T20:20:40.417 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.416+0000 7f03c9ffb640 1 -- 192.168.123.105:0/1388260921 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f03cc13c900 con 0x7f03cc10de30 2026-03-09T20:20:40.417 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.416+0000 7f03c9ffb640 1 -- 192.168.123.105:0/1388260921 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f03c4003160 con 0x7f03cc10de30 2026-03-09T20:20:40.417 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.416+0000 7f03c9ffb640 1 -- 192.168.123.105:0/1388260921 <== mon.0 
v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2845915967 0 0) 0x7f03cc13c900 con 0x7f03cc10de30 2026-03-09T20:20:40.418 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.416+0000 7f03c9ffb640 1 -- 192.168.123.105:0/1388260921 >> v1:192.168.123.105:6790/0 conn(0x7f03cc07aee0 legacy=0x7f03cc085500 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:40.418 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.416+0000 7f03c9ffb640 1 -- 192.168.123.105:0/1388260921 >> v1:192.168.123.109:6789/0 conn(0x7f03cc077210 legacy=0x7f03cc074710 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T20:20:40.418 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.416+0000 7f03c9ffb640 1 -- 192.168.123.105:0/1388260921 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f03cc13db00 con 0x7f03cc10de30 2026-03-09T20:20:40.418 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.416+0000 7f03c9ffb640 1 -- 192.168.123.105:0/1388260921 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f03c4003e10 con 0x7f03cc10de30 2026-03-09T20:20:40.418 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.417+0000 7f03c9ffb640 1 -- 192.168.123.105:0/1388260921 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f03c4005160 con 0x7f03cc10de30 2026-03-09T20:20:40.418 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.417+0000 7f03d2d15640 1 -- 192.168.123.105:0/1388260921 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f03cc13cb30 con 0x7f03cc10de30 2026-03-09T20:20:40.418 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.417+0000 7f03d2d15640 1 -- 192.168.123.105:0/1388260921 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f03cc13d160 con 0x7f03cc10de30 2026-03-09T20:20:40.419 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.419+0000 7f03c9ffb640 1 -- 192.168.123.105:0/1388260921 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 16) ==== 100051+0+0 (unknown 646619093 0 0) 0x7f03c4003fc0 con 0x7f03cc10de30 2026-03-09T20:20:40.421 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.419+0000 7f03c9ffb640 1 -- 192.168.123.105:0/1388260921 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(58..58 src has 1..58) ==== 5922+0+0 (unknown 1300484666 0 0) 0x7f03c4092a30 con 0x7f03cc10de30 2026-03-09T20:20:40.421 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.419+0000 7f03d2d15640 1 -- 192.168.123.105:0/1388260921 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f03cc10a680 con 0x7f03cc10de30 2026-03-09T20:20:40.422 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.422+0000 7f03c9ffb640 1 -- 192.168.123.105:0/1388260921 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f03c405d100 con 0x7f03cc10de30 2026-03-09T20:20:40.447 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.447+0000 7fd4e9d3b640 1 -- 192.168.123.105:0/312586484 >> v1:192.168.123.105:6789/0 conn(0x7fd4dc0b9d20 legacy=0x7fd4dc0bc110 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:40.448 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.449+0000 7fd4e9d3b640 1 -- 192.168.123.105:0/312586484 shutdown_connections 
2026-03-09T20:20:40.448 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.449+0000 7fd4e9d3b640 1 -- 192.168.123.105:0/312586484 >> 192.168.123.105:0/312586484 conn(0x7fd4dc01a440 msgr2=0x7fd4dc01a850 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:40.448 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.449+0000 7fd4e9d3b640 1 -- 192.168.123.105:0/312586484 shutdown_connections 2026-03-09T20:20:40.449 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.449+0000 7fd4e9d3b640 1 -- 192.168.123.105:0/312586484 wait complete. 2026-03-09T20:20:40.449 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.449+0000 7fd4e9d3b640 1 Processor -- start 2026-03-09T20:20:40.449 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.450+0000 7fd4e9d3b640 1 -- start start 2026-03-09T20:20:40.449 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.450+0000 7fd4e9d3b640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fd4dc153f50 con 0x7fd4dc0b9d20 2026-03-09T20:20:40.449 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.450+0000 7fd4e9d3b640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fd4dc155150 con 0x7fd4dc0a5520 2026-03-09T20:20:40.450 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.450+0000 7fd4e9d3b640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fd4dc156350 con 0x7fd4dc0a4a30 2026-03-09T20:20:40.450 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.450+0000 7fd4e3fff640 1 --1- >> v1:192.168.123.109:6789/0 conn(0x7fd4dc0a5520 0x7fd4dc14ee80 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.109:6789/0 says I am v1:192.168.123.105:56630/0 (socket says 192.168.123.105:56630) 2026-03-09T20:20:40.450 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.450+0000 7fd4e3fff640 1 -- 192.168.123.105:0/956536368 learned_addr learned my addr 192.168.123.105:0/956536368 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:20:40.450 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.450+0000 7fd4e1ffb640 1 -- 192.168.123.105:0/956536368 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3435244094 0 0) 0x7fd4dc155150 con 0x7fd4dc0a5520 2026-03-09T20:20:40.450 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.450+0000 7fd4e1ffb640 1 -- 192.168.123.105:0/956536368 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fd4c0003620 con 0x7fd4dc0a5520 2026-03-09T20:20:40.450 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.451+0000 7fd4e1ffb640 1 -- 192.168.123.105:0/956536368 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1530457956 0 0) 0x7fd4c0003620 con 0x7fd4dc0a5520 2026-03-09T20:20:40.450 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.451+0000 7fd4e1ffb640 1 -- 192.168.123.105:0/956536368 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fd4dc155150 con 0x7fd4dc0a5520 2026-03-09T20:20:40.450 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.451+0000 7fd4e1ffb640 1 -- 192.168.123.105:0/956536368 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7fd4d0003150 con 0x7fd4dc0a5520 2026-03-09T20:20:40.451 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.451+0000 7fd4e1ffb640 1 -- 
192.168.123.105:0/956536368 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 4130755733 0 0) 0x7fd4dc156350 con 0x7fd4dc0a4a30 2026-03-09T20:20:40.451 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.451+0000 7fd4e1ffb640 1 -- 192.168.123.105:0/956536368 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fd4c0003620 con 0x7fd4dc0a4a30 2026-03-09T20:20:40.451 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.452+0000 7fd4e1ffb640 1 -- 192.168.123.105:0/956536368 <== mon.1 v1:192.168.123.109:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 3155427271 0 0) 0x7fd4dc155150 con 0x7fd4dc0a5520 2026-03-09T20:20:40.451 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.452+0000 7fd4e1ffb640 1 -- 192.168.123.105:0/956536368 >> v1:192.168.123.105:6790/0 conn(0x7fd4dc0a4a30 legacy=0x7fd4dc0b9400 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:40.451 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.452+0000 7fd4e1ffb640 1 -- 192.168.123.105:0/956536368 >> v1:192.168.123.105:6789/0 conn(0x7fd4dc0b9d20 legacy=0x7fd4dc152650 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:40.451 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.452+0000 7fd4e1ffb640 1 -- 192.168.123.105:0/956536368 --> v1:192.168.123.109:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fd4dc157550 con 0x7fd4dc0a5520 2026-03-09T20:20:40.452 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.452+0000 7fd4e9d3b640 1 -- 192.168.123.105:0/956536368 --> v1:192.168.123.109:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7fd4dc156580 con 0x7fd4dc0a5520 2026-03-09T20:20:40.452 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.452+0000 7fd4e9d3b640 1 -- 192.168.123.105:0/956536368 --> v1:192.168.123.109:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7fd4dc156b30 con 0x7fd4dc0a5520 2026-03-09T20:20:40.452 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.453+0000 7fd4e1ffb640 1 -- 192.168.123.105:0/956536368 <== mon.1 v1:192.168.123.109:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7fd4d0003530 con 0x7fd4dc0a5520 2026-03-09T20:20:40.452 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.453+0000 7fd4e1ffb640 1 -- 192.168.123.105:0/956536368 <== mon.1 v1:192.168.123.109:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7fd4d0005dd0 con 0x7fd4dc0a5520 2026-03-09T20:20:40.453 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.454+0000 7fd4e9d3b640 1 -- 192.168.123.105:0/956536368 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fd4a8005180 con 0x7fd4dc0a5520 2026-03-09T20:20:40.454 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.455+0000 7fd4e1ffb640 1 -- 192.168.123.105:0/956536368 <== mon.1 v1:192.168.123.109:6789/0 7 ==== mgrmap(e 16) ==== 100051+0+0 (unknown 646619093 0 0) 0x7fd4d00039c0 con 0x7fd4dc0a5520 2026-03-09T20:20:40.455 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.455+0000 7fd4e1ffb640 1 -- 192.168.123.105:0/956536368 <== mon.1 v1:192.168.123.109:6789/0 8 ==== osd_map(58..58 src has 1..58) ==== 5922+0+0 (unknown 1300484666 0 0) 0x7fd4d00951e0 con 0x7fd4dc0a5520 2026-03-09T20:20:40.457 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.457+0000 7fd4e1ffb640 1 -- 192.168.123.105:0/956536368 <== mon.1 v1:192.168.123.109:6789/0 9 ==== mon_command_ack([{"prefix": 
"get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7fd4d005e150 con 0x7fd4dc0a5520 2026-03-09T20:20:40.543 INFO:tasks.cephadm.ceph_manager.ceph:need seq 197568495620 got 197568495621 for osd.7 2026-03-09T20:20:40.544 DEBUG:teuthology.parallel:result is None 2026-03-09T20:20:40.554 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.554+0000 7f03d2d15640 1 -- 192.168.123.105:0/1388260921 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "osd last-stat-seq", "id": 2} v 0) -- 0x7f03cc072d80 con 0x7f03cc10de30 2026-03-09T20:20:40.554 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.555+0000 7f03c9ffb640 1 -- 192.168.123.105:0/1388260921 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 2}]=0 v0) ==== 74+0+12 (unknown 92182286 0 2765769836) 0x7f03c4060db0 con 0x7f03cc10de30 2026-03-09T20:20:40.554 INFO:teuthology.orchestra.run.vm05.stdout:73014444049 2026-03-09T20:20:40.557 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.558+0000 7f03d2d15640 1 -- 192.168.123.105:0/1388260921 >> v1:192.168.123.105:6800/3290461294 conn(0x7f03a80780a0 legacy=0x7f03a807a560 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:40.557 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.558+0000 7f03d2d15640 1 -- 192.168.123.105:0/1388260921 >> v1:192.168.123.105:6789/0 conn(0x7f03cc10de30 legacy=0x7f03cc085c10 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:40.558 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.559+0000 7f03d2d15640 1 -- 192.168.123.105:0/1388260921 shutdown_connections 2026-03-09T20:20:40.558 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.559+0000 7f03d2d15640 1 -- 192.168.123.105:0/1388260921 >> 192.168.123.105:0/1388260921 conn(0x7f03cc06d730 msgr2=0x7f03cc07e400 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:40.558 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.559+0000 7f03d2d15640 1 -- 192.168.123.105:0/1388260921 shutdown_connections 2026-03-09T20:20:40.559 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.560+0000 7f03d2d15640 1 -- 192.168.123.105:0/1388260921 wait complete. 
2026-03-09T20:20:40.595 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.596+0000 7fd4e9d3b640 1 -- 192.168.123.105:0/956536368 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "osd last-stat-seq", "id": 4} v 0) -- 0x7fd4a8005470 con 0x7fd4dc0a5520 2026-03-09T20:20:40.596 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.596+0000 7fd4e1ffb640 1 -- 192.168.123.105:0/956536368 <== mon.1 v1:192.168.123.109:6789/0 10 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 4}]=0 v0) ==== 74+0+13 (unknown 1823589948 0 2995319903) 0x7fd4d0061e00 con 0x7fd4dc0a5520 2026-03-09T20:20:40.596 INFO:teuthology.orchestra.run.vm05.stdout:128849018892 2026-03-09T20:20:40.599 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.599+0000 7fd4e9d3b640 1 -- 192.168.123.105:0/956536368 >> v1:192.168.123.105:6800/3290461294 conn(0x7fd4c0078270 legacy=0x7fd4c007a730 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:40.599 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.599+0000 7fd4e9d3b640 1 -- 192.168.123.105:0/956536368 >> v1:192.168.123.109:6789/0 conn(0x7fd4dc0a5520 legacy=0x7fd4dc14ee80 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:40.599 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.600+0000 7fd4e9d3b640 1 -- 192.168.123.105:0/956536368 shutdown_connections 2026-03-09T20:20:40.599 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.600+0000 7fd4e9d3b640 1 -- 192.168.123.105:0/956536368 >> 192.168.123.105:0/956536368 conn(0x7fd4dc01a440 msgr2=0x7fd4dc0a4240 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:40.599 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.600+0000 7fd4e9d3b640 1 -- 192.168.123.105:0/956536368 shutdown_connections 2026-03-09T20:20:40.599 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:40.600+0000 7fd4e9d3b640 1 -- 192.168.123.105:0/956536368 wait complete. 2026-03-09T20:20:40.744 INFO:tasks.cephadm.ceph_manager.ceph:need seq 73014444048 got 73014444049 for osd.2 2026-03-09T20:20:40.744 DEBUG:teuthology.parallel:result is None 2026-03-09T20:20:40.758 INFO:tasks.cephadm.ceph_manager.ceph:need seq 128849018892 got 128849018892 for osd.4 2026-03-09T20:20:40.759 DEBUG:teuthology.parallel:result is None 2026-03-09T20:20:40.759 INFO:tasks.cephadm.ceph_manager.ceph:waiting for clean 2026-03-09T20:20:40.759 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph pg dump --format=json 2026-03-09T20:20:40.791 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:40 vm09 ceph-mon[54524]: pgmap v119: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 58 KiB/s rd, 4.4 KiB/s wr, 140 op/s 2026-03-09T20:20:40.791 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:40 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/2529193243' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-09T20:20:40.791 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:40 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:40.791 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:40 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:40.791 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:40 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:40.791 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:40 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-09T20:20:40.791 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:40 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2279724817' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-09T20:20:40.791 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:40 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/567258792' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-09T20:20:40.796 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:40 vm05 ceph-mon[51870]: pgmap v119: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 58 KiB/s rd, 4.4 KiB/s wr, 140 op/s 2026-03-09T20:20:40.796 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:40 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2529193243' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-09T20:20:40.796 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:40 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:40.796 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:40 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:40.796 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:40 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:40.796 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:40 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-09T20:20:40.796 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:40 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2279724817' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-09T20:20:40.796 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:40 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/567258792' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-09T20:20:40.796 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:40 vm05 ceph-mon[61345]: pgmap v119: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 58 KiB/s rd, 4.4 KiB/s wr, 140 op/s 2026-03-09T20:20:40.796 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:40 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/2529193243' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-09T20:20:40.796 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:40 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:40.796 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:40 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:40.796 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:40 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:40.797 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:40 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-09T20:20:40.797 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:40 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2279724817' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-09T20:20:40.797 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:40 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/567258792' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-09T20:20:40.982 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:20:41.069 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:20:40 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ignoring --setuser ceph since I am not root 2026-03-09T20:20:41.069 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:20:40 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ignoring --setgroup ceph since I am not root 2026-03-09T20:20:41.069 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:20:40 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:20:40.890+0000 7ff5347fa140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T20:20:41.069 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:20:40 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:20:40.935+0000 7ff5347fa140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T20:20:41.137 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:41.137+0000 7f008f9b2640 1 -- 192.168.123.105:0/2731934094 >> v1:192.168.123.109:6789/0 conn(0x7f0088111180 legacy=0x7f0088113640 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:41.140 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:41.141+0000 7f008f9b2640 1 -- 192.168.123.105:0/2731934094 shutdown_connections 2026-03-09T20:20:41.140 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:41.141+0000 7f008f9b2640 1 -- 192.168.123.105:0/2731934094 >> 192.168.123.105:0/2731934094 conn(0x7f0088100450 msgr2=0x7f0088102870 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:41.140 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:41.141+0000 7f008f9b2640 1 -- 192.168.123.105:0/2731934094 shutdown_connections 2026-03-09T20:20:41.140 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:41.141+0000 7f008f9b2640 1 -- 192.168.123.105:0/2731934094 wait complete. 
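[editor's note] After the per-OSD sequences are satisfied, the run moves to "waiting for clean" and repeatedly invokes `sudo cephadm shell --fsid ... -- ceph pg dump --format=json` (see the command logged above); the mon pgmap lines already report "132 pgs: 132 active+clean". A hedged sketch of such a wait-for-clean loop, assuming the dump exposes per-PG state under pg_map/pg_stats (the exact JSON layout can vary by release) and using placeholder fsid/paths:

    import json
    import subprocess
    import time

    # Illustrative only: poll `ceph pg dump --format=json` via cephadm shell and
    # return once every PG reports active+clean, as the log's pgmap lines show.
    def wait_for_clean(fsid: str, timeout: float = 300.0) -> None:
        deadline = time.monotonic() + timeout
        while True:
            out = subprocess.check_output(
                ["sudo", "cephadm", "shell", "--fsid", fsid, "--",
                 "ceph", "pg", "dump", "--format=json"],
                text=True,
            )
            stats = json.loads(out).get("pg_map", {}).get("pg_stats", [])
            if stats and all("active+clean" in pg.get("state", "") for pg in stats):
                return
            if time.monotonic() > deadline:
                raise TimeoutError("cluster did not reach active+clean in time")
            time.sleep(3)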
2026-03-09T20:20:41.141 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:41.142+0000 7f008f9b2640 1 Processor -- start 2026-03-09T20:20:41.141 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:41.142+0000 7f008f9b2640 1 -- start start 2026-03-09T20:20:41.142 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:41.142+0000 7f008cf26640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7f008810d5e0 0x7f00881a6470 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.105:57510/0 (socket says 192.168.123.105:57510) 2026-03-09T20:20:41.142 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:41.142+0000 7f008d727640 1 --1- >> v1:192.168.123.109:6789/0 conn(0x7f008810a730 0x7f00881109f0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.109:6789/0 says I am v1:192.168.123.105:56648/0 (socket says 192.168.123.105:56648) 2026-03-09T20:20:41.142 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:41.143+0000 7f008cf26640 1 -- 192.168.123.105:0/2149218065 learned_addr learned my addr 192.168.123.105:0/2149218065 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:20:41.142 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:41.143+0000 7f008f9b2640 1 -- 192.168.123.105:0/2149218065 --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f00881ab6c0 con 0x7f0088111180 2026-03-09T20:20:41.142 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:41.143+0000 7f008f9b2640 1 -- 192.168.123.105:0/2149218065 --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f00881ac8c0 con 0x7f008810a730 2026-03-09T20:20:41.143 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:41.143+0000 7f008f9b2640 1 -- 192.168.123.105:0/2149218065 --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f00881aca90 con 0x7f008810d5e0 2026-03-09T20:20:41.143 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:41.144+0000 7f00767fc640 1 -- 192.168.123.105:0/2149218065 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 600686661 0 0) 0x7f00881ac8c0 con 0x7f008810a730 2026-03-09T20:20:41.143 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:41.144+0000 7f00767fc640 1 -- 192.168.123.105:0/2149218065 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f0060003620 con 0x7f008810a730 2026-03-09T20:20:41.143 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:41.144+0000 7f00767fc640 1 -- 192.168.123.105:0/2149218065 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1217884831 0 0) 0x7f00881aca90 con 0x7f008810d5e0 2026-03-09T20:20:41.143 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:41.144+0000 7f00767fc640 1 -- 192.168.123.105:0/2149218065 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f00881ac8c0 con 0x7f008810d5e0 2026-03-09T20:20:41.143 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:41.144+0000 7f00767fc640 1 -- 192.168.123.105:0/2149218065 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3545543070 0 0) 0x7f00881ab6c0 con 0x7f0088111180 2026-03-09T20:20:41.144 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:41.144+0000 7f00767fc640 1 -- 192.168.123.105:0/2149218065 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f00881aca90 
con 0x7f0088111180 2026-03-09T20:20:41.144 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:41.144+0000 7f00767fc640 1 -- 192.168.123.105:0/2149218065 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2523361472 0 0) 0x7f00881ac8c0 con 0x7f008810d5e0 2026-03-09T20:20:41.144 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:41.144+0000 7f00767fc640 1 -- 192.168.123.105:0/2149218065 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f00881ab6c0 con 0x7f008810d5e0 2026-03-09T20:20:41.144 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:41.144+0000 7f00767fc640 1 -- 192.168.123.105:0/2149218065 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f007c003090 con 0x7f008810d5e0 2026-03-09T20:20:41.144 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:41.144+0000 7f00767fc640 1 -- 192.168.123.105:0/2149218065 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 360771355 0 0) 0x7f0060003620 con 0x7f008810a730 2026-03-09T20:20:41.144 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:41.144+0000 7f00767fc640 1 -- 192.168.123.105:0/2149218065 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f00881ac8c0 con 0x7f008810a730 2026-03-09T20:20:41.144 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:41.144+0000 7f00767fc640 1 -- 192.168.123.105:0/2149218065 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f0078002fc0 con 0x7f008810a730 2026-03-09T20:20:41.144 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:41.144+0000 7f00767fc640 1 -- 192.168.123.105:0/2149218065 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2868988375 0 0) 0x7f00881aca90 con 0x7f0088111180 2026-03-09T20:20:41.144 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:41.145+0000 7f00767fc640 1 -- 192.168.123.105:0/2149218065 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f0060003620 con 0x7f0088111180 2026-03-09T20:20:41.144 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:41.145+0000 7f00767fc640 1 -- 192.168.123.105:0/2149218065 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f00840031c0 con 0x7f0088111180 2026-03-09T20:20:41.144 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:41.145+0000 7f00767fc640 1 -- 192.168.123.105:0/2149218065 <== mon.2 v1:192.168.123.105:6790/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 4043204812 0 0) 0x7f00881ab6c0 con 0x7f008810d5e0 2026-03-09T20:20:41.144 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:41.145+0000 7f00767fc640 1 -- 192.168.123.105:0/2149218065 >> v1:192.168.123.109:6789/0 conn(0x7f008810a730 legacy=0x7f00881109f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:41.144 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:41.145+0000 7f00767fc640 1 -- 192.168.123.105:0/2149218065 >> v1:192.168.123.105:6789/0 conn(0x7f0088111180 legacy=0x7f00881a9dc0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:41.144 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:41.145+0000 7f00767fc640 1 -- 192.168.123.105:0/2149218065 --> v1:192.168.123.105:6790/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f00881acc60 con 0x7f008810d5e0 2026-03-09T20:20:41.145 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:41.145+0000 7f00767fc640 1 -- 192.168.123.105:0/2149218065 <== mon.2 v1:192.168.123.105:6790/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f007c003b60 con 0x7f008810d5e0 2026-03-09T20:20:41.145 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:41.146+0000 7f00767fc640 1 -- 192.168.123.105:0/2149218065 <== mon.2 v1:192.168.123.105:6790/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f007c005b80 con 0x7f008810d5e0 2026-03-09T20:20:41.145 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:41.146+0000 7f008f9b2640 1 -- 192.168.123.105:0/2149218065 --> v1:192.168.123.105:6790/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f00881acf50 con 0x7f008810d5e0 2026-03-09T20:20:41.146 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:41.146+0000 7f008f9b2640 1 -- 192.168.123.105:0/2149218065 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=0}) -- 0x7f00881ad460 con 0x7f008810d5e0 2026-03-09T20:20:41.146 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:41.147+0000 7f008f9b2640 1 -- 192.168.123.105:0/2149218065 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f0088105e70 con 0x7f008810d5e0 2026-03-09T20:20:41.147 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:41.147+0000 7f00767fc640 1 -- 192.168.123.105:0/2149218065 <== mon.2 v1:192.168.123.105:6790/0 7 ==== mgrmap(e 17) ==== 100065+0+0 (unknown 858876559 0 0) 0x7f007c003710 con 0x7f008810d5e0 2026-03-09T20:20:41.149 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:41.148+0000 7f008d727640 1 -- 192.168.123.105:0/2149218065 >> v1:192.168.123.105:6800/3290461294 conn(0x7f0060078470 legacy=0x7f006007a930 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v1:192.168.123.105:6800/3290461294 2026-03-09T20:20:41.149 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:41.150+0000 7f00767fc640 1 -- 192.168.123.105:0/2149218065 <== mon.2 v1:192.168.123.105:6790/0 8 ==== osd_map(58..58 src has 1..58) ==== 5922+0+0 (unknown 1300484666 0 0) 0x7f007c059940 con 0x7f008810d5e0 2026-03-09T20:20:41.150 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:41.150+0000 7f00767fc640 1 -- 192.168.123.105:0/2149218065 <== mon.2 v1:192.168.123.105:6790/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f007c05dd30 con 0x7f008810d5e0 2026-03-09T20:20:41.242 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:41.242+0000 7f008f9b2640 1 -- 192.168.123.105:0/2149218065 --> v1:192.168.123.105:6800/3290461294 -- mgr_command(tid 0: {"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}) -- 0x7f0088116f10 con 0x7f0060078470 2026-03-09T20:20:41.272 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:20:40 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: ignoring --setuser ceph since I am not root 2026-03-09T20:20:41.272 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:20:40 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: ignoring --setgroup ceph since I am not root 2026-03-09T20:20:41.272 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:20:40 vm09 ceph-mgr[55781]: -- 192.168.123.109:0/987239564 <== mon.2 v1:192.168.123.105:6790/0 6 ==== auth_reply(proto 2 0 (0) Success) ==== 194+0+0 (unknown 1097485048 0 0) 0x55b983008000 con 0x55b982fe7000 2026-03-09T20:20:41.272 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:20:40 vm09 
ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:20:40.893+0000 7fdfb485e140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T20:20:41.272 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:20:40 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:20:40.935+0000 7fdfb485e140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T20:20:41.349 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:41.350+0000 7f008d727640 1 -- 192.168.123.105:0/2149218065 >> v1:192.168.123.105:6800/3290461294 conn(0x7f0060078470 legacy=0x7f006007a930 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v1:192.168.123.105:6800/3290461294 2026-03-09T20:20:41.643 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:20:41 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:20:41.331+0000 7fdfb485e140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T20:20:41.643 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:41 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1388260921' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-09T20:20:41.643 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:41 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/956536368' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-09T20:20:41.643 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:41 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:41.643 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:41 vm09 ceph-mon[54524]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-09T20:20:41.643 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:41 vm09 ceph-mon[54524]: mgrmap e17: y(active, since 2m), standbys: x 2026-03-09T20:20:41.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:41 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1388260921' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-09T20:20:41.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:41 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/956536368' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-09T20:20:41.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:41 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:41.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:41 vm05 ceph-mon[61345]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-09T20:20:41.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:41 vm05 ceph-mon[61345]: mgrmap e17: y(active, since 2m), standbys: x 2026-03-09T20:20:41.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:41 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1388260921' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-09T20:20:41.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:41 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/956536368' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-09T20:20:41.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:41 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' 2026-03-09T20:20:41.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:41 vm05 ceph-mon[51870]: from='mgr.14150 v1:192.168.123.105:0/3677528619' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-09T20:20:41.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:41 vm05 ceph-mon[51870]: mgrmap e17: y(active, since 2m), standbys: x 2026-03-09T20:20:41.660 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:20:41 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:20:41.357+0000 7ff5347fa140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T20:20:41.750 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:41.750+0000 7f008d727640 1 -- 192.168.123.105:0/2149218065 >> v1:192.168.123.105:6800/3290461294 conn(0x7f0060078470 legacy=0x7f006007a930 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v1:192.168.123.105:6800/3290461294 2026-03-09T20:20:42.022 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:20:41 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:20:41.642+0000 7fdfb485e140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T20:20:42.023 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:20:41 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T20:20:42.023 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:20:41 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-09T20:20:42.023 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:20:41 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: from numpy import show_config as show_numpy_config 2026-03-09T20:20:42.023 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:20:41 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:20:41.726+0000 7fdfb485e140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T20:20:42.023 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:20:41 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:20:41.761+0000 7fdfb485e140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T20:20:42.023 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:20:41 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:20:41.830+0000 7fdfb485e140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T20:20:42.160 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:20:41 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:20:41.670+0000 7ff5347fa140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T20:20:42.160 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:20:41 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T20:20:42.160 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:20:41 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-09T20:20:42.160 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:20:41 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: from numpy import show_config as show_numpy_config 2026-03-09T20:20:42.160 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:20:41 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:20:41.753+0000 7ff5347fa140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T20:20:42.160 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:20:41 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:20:41.788+0000 7ff5347fa140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T20:20:42.160 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:20:41 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:20:41.857+0000 7ff5347fa140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T20:20:42.551 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:42.551+0000 7f008d727640 1 -- 192.168.123.105:0/2149218065 >> v1:192.168.123.105:6800/3290461294 conn(0x7f0060078470 legacy=0x7f006007a930 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v1:192.168.123.105:6800/3290461294 2026-03-09T20:20:42.591 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:20:42 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:20:42.328+0000 7fdfb485e140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T20:20:42.591 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:20:42 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:20:42.437+0000 7fdfb485e140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T20:20:42.591 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:20:42 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:20:42.478+0000 7fdfb485e140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T20:20:42.591 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:20:42 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:20:42.512+0000 7fdfb485e140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T20:20:42.591 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:20:42 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:20:42.553+0000 7fdfb485e140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T20:20:42.637 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:20:42 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:20:42.370+0000 7ff5347fa140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T20:20:42.637 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:20:42 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:20:42.483+0000 7ff5347fa140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T20:20:42.637 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:20:42 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:20:42.523+0000 7ff5347fa140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T20:20:42.637 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:20:42 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:20:42.557+0000 7ff5347fa140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T20:20:42.637 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:20:42 vm05 
ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:20:42.599+0000 7ff5347fa140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T20:20:42.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:20:42 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:20:42.637+0000 7ff5347fa140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T20:20:42.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:20:42 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:20:42.810+0000 7ff5347fa140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T20:20:42.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:20:42 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:20:42.859+0000 7ff5347fa140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T20:20:43.022 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:20:42 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:20:42.590+0000 7fdfb485e140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T20:20:43.022 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:20:42 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:20:42.762+0000 7fdfb485e140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T20:20:43.022 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:20:42 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:20:42.814+0000 7fdfb485e140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T20:20:43.327 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:20:43 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:20:43.037+0000 7fdfb485e140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T20:20:43.383 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:20:43 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:20:43.090+0000 7ff5347fa140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T20:20:43.601 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:20:43 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:20:43.326+0000 7fdfb485e140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T20:20:43.602 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:20:43 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:20:43.364+0000 7fdfb485e140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T20:20:43.602 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:20:43 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:20:43.406+0000 7fdfb485e140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T20:20:43.602 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:20:43 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:20:43.481+0000 7fdfb485e140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T20:20:43.602 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:20:43 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:20:43.517+0000 7fdfb485e140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T20:20:43.660 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:20:43 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:20:43.383+0000 7ff5347fa140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T20:20:43.660 
INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:20:43 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:20:43.420+0000 7ff5347fa140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T20:20:43.660 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:20:43 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:20:43.461+0000 7ff5347fa140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T20:20:43.660 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:20:43 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:20:43.540+0000 7ff5347fa140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T20:20:43.660 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:20:43 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:20:43.578+0000 7ff5347fa140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T20:20:43.863 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:20:43 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:20:43.600+0000 7fdfb485e140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T20:20:43.863 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:20:43 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:20:43.723+0000 7fdfb485e140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T20:20:43.928 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:20:43 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:20:43.663+0000 7ff5347fa140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T20:20:43.928 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:20:43 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:20:43.784+0000 7ff5347fa140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T20:20:44.134 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:44.134+0000 7f00767fc640 1 -- 192.168.123.105:0/2149218065 <== mon.2 v1:192.168.123.105:6790/0 10 ==== mgrmap(e 18) ==== 100065+0+0 (unknown 2450898538 0 0) 0x7f007c058cc0 con 0x7f008810d5e0 2026-03-09T20:20:44.144 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:20:43 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:20:43.862+0000 7fdfb485e140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T20:20:44.144 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:20:43 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:20:43.899+0000 7fdfb485e140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T20:20:44.144 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:20:43 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: [09/Mar/2026:20:20:43] ENGINE Bus STARTING 2026-03-09T20:20:44.144 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:20:43 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: CherryPy Checker: 2026-03-09T20:20:44.144 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:20:43 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: The Application mounted at '' has an empty config. 
2026-03-09T20:20:44.144 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:20:43 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: 2026-03-09T20:20:44.144 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:20:44 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: [09/Mar/2026:20:20:44] ENGINE Serving on http://:::9283 2026-03-09T20:20:44.144 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:20:44 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x[55755]: [09/Mar/2026:20:20:44] ENGINE Bus STARTED 2026-03-09T20:20:44.152 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:44.153+0000 7f008d727640 1 -- 192.168.123.105:0/2149218065 >> v1:192.168.123.105:6800/3290461294 conn(0x7f0060078470 legacy=0x7f006007a930 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v1:192.168.123.105:6800/3290461294 2026-03-09T20:20:44.160 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:44.160+0000 7f00767fc640 1 -- 192.168.123.105:0/2149218065 <== mon.2 v1:192.168.123.105:6790/0 11 ==== mgrmap(e 19) ==== 99714+0+0 (unknown 1668460923 0 0) 0x7f007c0595e0 con 0x7f008810d5e0 2026-03-09T20:20:44.160 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:44.160+0000 7f00767fc640 1 -- 192.168.123.105:0/2149218065 >> v1:192.168.123.105:6800/3290461294 conn(0x7f0060078470 legacy=0x7f006007a930 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T20:20:44.235 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:44 vm05 ceph-mon[61345]: Standby manager daemon x restarted 2026-03-09T20:20:44.236 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:44 vm05 ceph-mon[61345]: Standby manager daemon x started 2026-03-09T20:20:44.236 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:44 vm05 ceph-mon[61345]: from='mgr.? v1:192.168.123.109:0/3378181101' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T20:20:44.236 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:44 vm05 ceph-mon[61345]: from='mgr.? v1:192.168.123.109:0/3378181101' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T20:20:44.236 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:44 vm05 ceph-mon[61345]: from='mgr.? v1:192.168.123.109:0/3378181101' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T20:20:44.236 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:44 vm05 ceph-mon[61345]: from='mgr.? v1:192.168.123.109:0/3378181101' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T20:20:44.236 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:44 vm05 ceph-mon[51870]: Standby manager daemon x restarted 2026-03-09T20:20:44.236 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:44 vm05 ceph-mon[51870]: Standby manager daemon x started 2026-03-09T20:20:44.236 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:44 vm05 ceph-mon[51870]: from='mgr.? v1:192.168.123.109:0/3378181101' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T20:20:44.236 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:44 vm05 ceph-mon[51870]: from='mgr.? v1:192.168.123.109:0/3378181101' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T20:20:44.236 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:44 vm05 ceph-mon[51870]: from='mgr.? 
v1:192.168.123.109:0/3378181101' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T20:20:44.236 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:44 vm05 ceph-mon[51870]: from='mgr.? v1:192.168.123.109:0/3378181101' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T20:20:44.236 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:20:43 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:20:43.928+0000 7ff5347fa140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T20:20:44.236 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:20:43 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:20:43.965+0000 7ff5347fa140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T20:20:44.236 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:20:44 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: [09/Mar/2026:20:20:44] ENGINE Bus STARTING 2026-03-09T20:20:44.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:44 vm09 ceph-mon[54524]: Standby manager daemon x restarted 2026-03-09T20:20:44.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:44 vm09 ceph-mon[54524]: Standby manager daemon x started 2026-03-09T20:20:44.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:44 vm09 ceph-mon[54524]: from='mgr.? v1:192.168.123.109:0/3378181101' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T20:20:44.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:44 vm09 ceph-mon[54524]: from='mgr.? v1:192.168.123.109:0/3378181101' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T20:20:44.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:44 vm09 ceph-mon[54524]: from='mgr.? v1:192.168.123.109:0/3378181101' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T20:20:44.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:44 vm09 ceph-mon[54524]: from='mgr.? v1:192.168.123.109:0/3378181101' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T20:20:44.671 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:20:44 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: CherryPy Checker: 2026-03-09T20:20:44.671 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:20:44 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: The Application mounted at '' has an empty config. 
2026-03-09T20:20:44.671 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:20:44 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: 2026-03-09T20:20:44.671 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:20:44 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: [09/Mar/2026:20:20:44] ENGINE Serving on http://:::9283 2026-03-09T20:20:44.671 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:20:44 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: [09/Mar/2026:20:20:44] ENGINE Bus STARTED 2026-03-09T20:20:45.152 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[61345]: mgrmap e18: y(active, since 2m), standbys: x 2026-03-09T20:20:45.152 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[61345]: Active manager daemon y restarted 2026-03-09T20:20:45.152 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[61345]: Activating manager daemon y 2026-03-09T20:20:45.152 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[61345]: osdmap e59: 8 total, 8 up, 8 in 2026-03-09T20:20:45.152 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[61345]: mgrmap e19: y(active, starting, since 0.0150357s), standbys: x 2026-03-09T20:20:45.152 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:20:45.152 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:20:45.152 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:20:45.152 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T20:20:45.152 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T20:20:45.152 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:20:45.152 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:20:45.152 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:20:45.152 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:20:45.152 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:20:45.152 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[61345]: from='mgr.24602 
v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:20:45.152 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T20:20:45.152 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T20:20:45.153 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T20:20:45.153 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T20:20:45.153 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T20:20:45.153 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[61345]: Manager daemon y is now available 2026-03-09T20:20:45.153 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:45.153 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:20:45.153 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:20:45.153 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:20:45.153 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:20:45.153 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T20:20:45.153 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T20:20:45.153 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:45.153 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:45.176 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.176+0000 7f00767fc640 1 -- 192.168.123.105:0/2149218065 <== mon.2 v1:192.168.123.105:6790/0 12 ==== mgrmap(e 20) ==== 99806+0+0 (unknown 3641764485 0 0) 0x7f007c059eb0 con 0x7f008810d5e0 2026-03-09T20:20:45.176 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.177+0000 7f00767fc640 1 -- 
192.168.123.105:0/2149218065 --> v1:192.168.123.105:6800/1903060503 -- mgr_command(tid 0: {"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}) -- 0x7f0088116f10 con 0x7f00600828f0 2026-03-09T20:20:45.195 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:20:45.195 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.195+0000 7f00767fc640 1 -- 192.168.123.105:0/2149218065 <== mgr.24602 v1:192.168.123.105:6800/1903060503 1 ==== mgr_command_reply(tid 0: 0 Warning: due to ceph-mgr restart, some PG states may not be up to date 2026-03-09T20:20:45.195 INFO:teuthology.orchestra.run.vm05.stderr:dumped all) ==== 89+0+314768 (unknown 3231119501 0 1862871322) 0x7f0088116f10 con 0x7f00600828f0 2026-03-09T20:20:45.198 INFO:teuthology.orchestra.run.vm05.stderr:Warning: due to ceph-mgr restart, some PG states may not be up to date 2026-03-09T20:20:45.198 INFO:teuthology.orchestra.run.vm05.stderr:dumped all 2026-03-09T20:20:45.199 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.200+0000 7f008f9b2640 1 -- 192.168.123.105:0/2149218065 >> v1:192.168.123.105:6800/1903060503 conn(0x7f00600828f0 legacy=0x7f0060084ce0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:45.199 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.200+0000 7f008f9b2640 1 -- 192.168.123.105:0/2149218065 >> v1:192.168.123.105:6790/0 conn(0x7f008810d5e0 legacy=0x7f00881a6470 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:45.199 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.200+0000 7f008df28640 1 -- 192.168.123.105:0/2149218065 reap_dead start 2026-03-09T20:20:45.199 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.200+0000 7f008f9b2640 1 -- 192.168.123.105:0/2149218065 shutdown_connections 2026-03-09T20:20:45.199 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.200+0000 7f008f9b2640 1 -- 192.168.123.105:0/2149218065 >> 192.168.123.105:0/2149218065 conn(0x7f0088100450 msgr2=0x7f0088103520 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:45.200 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.200+0000 7f008f9b2640 1 -- 192.168.123.105:0/2149218065 shutdown_connections 2026-03-09T20:20:45.200 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.200+0000 7f008f9b2640 1 -- 192.168.123.105:0/2149218065 wait complete. 
2026-03-09T20:20:45.274 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:45 vm09 ceph-mon[54524]: mgrmap e18: y(active, since 2m), standbys: x 2026-03-09T20:20:45.274 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:45 vm09 ceph-mon[54524]: Active manager daemon y restarted 2026-03-09T20:20:45.274 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:45 vm09 ceph-mon[54524]: Activating manager daemon y 2026-03-09T20:20:45.274 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:45 vm09 ceph-mon[54524]: osdmap e59: 8 total, 8 up, 8 in 2026-03-09T20:20:45.274 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:45 vm09 ceph-mon[54524]: mgrmap e19: y(active, starting, since 0.0150357s), standbys: x 2026-03-09T20:20:45.274 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:45 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:20:45.274 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:45 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:20:45.274 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:45 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:20:45.274 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:45 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T20:20:45.274 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:45 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T20:20:45.274 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:45 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:20:45.274 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:45 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:20:45.274 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:45 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:20:45.274 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:45 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:20:45.274 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:45 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:20:45.274 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:45 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:20:45.274 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:45 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T20:20:45.274 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:45 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: 
dispatch 2026-03-09T20:20:45.274 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:45 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T20:20:45.274 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:45 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T20:20:45.274 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:45 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T20:20:45.274 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:45 vm09 ceph-mon[54524]: Manager daemon y is now available 2026-03-09T20:20:45.274 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:45 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:45.274 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:45 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:20:45.274 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:45 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:20:45.274 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:45 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:20:45.275 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:45 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:20:45.275 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:45 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T20:20:45.275 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:45 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T20:20:45.275 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:45 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:45.275 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:45 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:45.275 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:20:45 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:20:45.372 
INFO:teuthology.orchestra.run.vm05.stdout:{"pg_ready":false,"pg_map":{"version":2,"stamp":"2026-03-09T20:20:44.174461+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":0,"num_osds":0,"num_per_pool_osds":0,"num_per_pool_omap_osds":0,"kb":0,"kb_used":0,"kb_used_data":0,"kb_used_omap":0,"kb_used_meta":0,"kb_avail":0,"statfs":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"0.000000"},"pg_stats":[{"pgid":"3.1f","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-
09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"4.18","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read
":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"5.19","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"6.1a","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","l
ast_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"6.1b","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recov
ered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"3.1e","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"4.19","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub"
:"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"5.18","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0
,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"3.1d","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"4.1a","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.1424
11+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"5.1b","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"n
um_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"6.18","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"3.1c","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hits
et_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"4.1b","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"
5.1a","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"6.19","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"s
naptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"6.1e","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"3.1b","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-0
3-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"4.1c","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"n
um_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"5.1d","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"6.1f","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_
unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"3.1a","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_erro
rs":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"4.1d","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"5.1c","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_lo
g_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"6.1c","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num
_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"3.19","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"4.1e","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_
scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"5.1f","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,
"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"4.f","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"3.8","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":
false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"5.e","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_locati
on_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"6.d","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"4.0","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_le
n":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"3.7","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"5.1","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T2
0:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"6.2","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies"
:0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"4.1","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"3.6","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20
:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"5.0","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_
read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"6.3","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"4.2","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsiz
ed":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"3.5","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num
_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"5.3","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"6.0","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_s
crub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"4.3","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mod
e_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"3.4","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"5.2","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects
_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"6.1","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"n
um_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"4.4","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"3.3","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_inval
id":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"2.2","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"5.5","version":"0'0",
"reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"6.6","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"st
at_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"4.7","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"3.0","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000
","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"2.1","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"n
um_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"5.6","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"6.5","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:4
4.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"4.6","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors"
:0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"3.1","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"2.0","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"las
t_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"5.7","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_
evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"6.4","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"4.5","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.
142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"3.2","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_
omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"1.0","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"5.4","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"
omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"6.7","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_p
rimary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"4.e","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"3.9","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub
_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"5.f","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"6.c","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":
"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"4.d","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary"
:0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"3.a","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"5.c","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_activ
e":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"6.f","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write
_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"4.c","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"3.b","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+000
0","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"5.d","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_s
et_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"6.e","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"4.b","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.14
2411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"3.c","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"n
um_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"5.a","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"6.9","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups
_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"4.a","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"ac
ting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"3.d","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"5.b","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false
,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"6.8","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"4.9","version":"0'0","reported_seq":0,"reported_epoch":0
,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"3.e","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects
":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"5.8","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"6.b","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:4
4.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"4.8","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_
dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"3.f","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"5.9","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2
026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"6.a","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_b
ytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"3.10","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"4.17","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","
parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"5.16","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_p
romote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"6.15","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"4.16","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub
_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"3.11","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_
manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"5.17","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"6.14","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"
hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"4.15","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary
":-1,"purged_snaps":[]},{"pgid":"3.12","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"5.14","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_d
uration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"6.17","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"4.14","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.1
42411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"3.13","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missi
ng":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"5.15","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"6.16","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last
_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"4.13","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_
errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"3.14","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"5.12","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoc
h":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"6.11","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"
num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"4.12","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"3.15","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","l
ast_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"5.13","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mod
e_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"6.10","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"4.11","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"
ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"3.16","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],
"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"5.10","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"6.13","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manif
est_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"4.10","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"3.17","version":"0'0","reported_seq":0,"reported_epoch":0,"sta
te":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"5.11","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,
"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"6.12","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"6.1d","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.
142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"3.18","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_d
irty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"4.1f","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]},{"pgid":"5.1e","version":"0'0","reported_seq":0,"reported_epoch":0,"state":"unknown","last_fresh":"2026-03-09T20:20:44.142411+0000","last_change":"2026-03-09T20:20:44.142411+0000","last_active":"2026-03-09T20:20:44.142411+0000","last_peered":"2026-03-09T20:20:44.142411+0000","last_clean":"2026-03-09T20:20:44.142411+0000","last_became_active":"0.000000","last_became_peered":"0.000000","last_unstale":"2026-03-09T20:20:44.142411+0000","last_undegraded":"
2026-03-09T20:20:44.142411+0000","last_fullsized":"2026-03-09T20:20:44.142411+0000","mapping_epoch":0,"log_start":"0'0","ondisk_log_start":"0'0","created":0,"last_epoch_clean":0,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:44.142411+0000","last_clean_scrub_stamp":"2026-03-09T20:20:44.142411+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"--","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[],"acting":[],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":-1,"acting_primary":-1,"purged_snaps":[]}],"pool_stats":[{"poolid":6,"num_pg":32,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0},{"poolid":5,"num_pg":32,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0
,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0},{"poolid":4,"num_pg":32,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0},{"poolid":3,"num_pg":32,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0},{"poolid":2,"num_pg":3,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_
recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0},{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0}],"osd_stats":[],"pool_statfs":[]}} 2026-03-09T20:20:45.374 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph pg dump --format=json 2026-03-09T20:20:45.413 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[51870]: mgrmap e18: y(active, since 2m), standbys: x 2026-03-09T20:20:45.413 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[51870]: Active manager daemon y restarted 2026-03-09T20:20:45.413 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[51870]: Activating manager daemon y 2026-03-09T20:20:45.413 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[51870]: osdmap e59: 8 total, 8 up, 8 in 2026-03-09T20:20:45.413 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[51870]: mgrmap e19: y(active, starting, since 0.0150357s), standbys: x 2026-03-09T20:20:45.413 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:20:45.413 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:20:45.413 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": 
"c"}]: dispatch 2026-03-09T20:20:45.413 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T20:20:45.413 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T20:20:45.413 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:20:45.413 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:20:45.413 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:20:45.413 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T20:20:45.413 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T20:20:45.413 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T20:20:45.414 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T20:20:45.414 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T20:20:45.414 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T20:20:45.414 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T20:20:45.414 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T20:20:45.414 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[51870]: Manager daemon y is now available 2026-03-09T20:20:45.414 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:45.414 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:20:45.414 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:20:45.414 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:20:45.414 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:20:45.414 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T20:20:45.414 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T20:20:45.414 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:45.414 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:45 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:45.610 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:20:45.785 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.785+0000 7f62b58d0640 1 -- 192.168.123.105:0/387382271 >> v1:192.168.123.105:6789/0 conn(0x7f62b0077340 legacy=0x7f62b00797e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:45.785 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.785+0000 7f62b58d0640 1 -- 192.168.123.105:0/387382271 shutdown_connections 2026-03-09T20:20:45.785 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.785+0000 7f62b58d0640 1 -- 192.168.123.105:0/387382271 >> 192.168.123.105:0/387382271 conn(0x7f62b006d560 msgr2=0x7f62b006d970 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:45.786 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.786+0000 7f62b58d0640 1 -- 192.168.123.105:0/387382271 shutdown_connections 2026-03-09T20:20:45.786 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.786+0000 7f62b58d0640 1 -- 192.168.123.105:0/387382271 wait complete. 
2026-03-09T20:20:45.786 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.786+0000 7f62b58d0640 1 Processor -- start 2026-03-09T20:20:45.786 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.786+0000 7f62b58d0640 1 -- start start 2026-03-09T20:20:45.786 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.786+0000 7f62b58d0640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f62b0086150 con 0x7f62b0074040 2026-03-09T20:20:45.786 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.786+0000 7f62b58d0640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f62b0086320 con 0x7f62b0085ca0 2026-03-09T20:20:45.786 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.786+0000 7f62b58d0640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f62b01c36b0 con 0x7f62b007af00 2026-03-09T20:20:45.786 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.786+0000 7f62affff640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7f62b007af00 0x7f62b0085590 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.105:57584/0 (socket says 192.168.123.105:57584) 2026-03-09T20:20:45.786 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.786+0000 7f62affff640 1 -- 192.168.123.105:0/4109364722 learned_addr learned my addr 192.168.123.105:0/4109364722 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:20:45.786 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.787+0000 7f62adffb640 1 -- 192.168.123.105:0/4109364722 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1236750560 0 0) 0x7f62b01c36b0 con 0x7f62b007af00 2026-03-09T20:20:45.786 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.787+0000 7f62adffb640 1 -- 192.168.123.105:0/4109364722 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f6294003600 con 0x7f62b007af00 2026-03-09T20:20:45.789 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.787+0000 7f62adffb640 1 -- 192.168.123.105:0/4109364722 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1696745424 0 0) 0x7f62b0086320 con 0x7f62b0085ca0 2026-03-09T20:20:45.789 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.787+0000 7f62adffb640 1 -- 192.168.123.105:0/4109364722 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f62b01c36b0 con 0x7f62b0085ca0 2026-03-09T20:20:45.789 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.787+0000 7f62adffb640 1 -- 192.168.123.105:0/4109364722 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 171235554 0 0) 0x7f62b0086150 con 0x7f62b0074040 2026-03-09T20:20:45.789 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.788+0000 7f62adffb640 1 -- 192.168.123.105:0/4109364722 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f62b0086320 con 0x7f62b0074040 2026-03-09T20:20:45.789 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.788+0000 7f62adffb640 1 -- 192.168.123.105:0/4109364722 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1255187372 0 0) 0x7f6294003600 con 0x7f62b007af00 2026-03-09T20:20:45.789 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.788+0000 7f62adffb640 1 -- 192.168.123.105:0/4109364722 --> 
v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f62b0086150 con 0x7f62b007af00 2026-03-09T20:20:45.789 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.788+0000 7f62adffb640 1 -- 192.168.123.105:0/4109364722 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1032671905 0 0) 0x7f62b01c36b0 con 0x7f62b0085ca0 2026-03-09T20:20:45.789 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.788+0000 7f62adffb640 1 -- 192.168.123.105:0/4109364722 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f6294003600 con 0x7f62b0085ca0 2026-03-09T20:20:45.789 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.788+0000 7f62adffb640 1 -- 192.168.123.105:0/4109364722 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f62a8003500 con 0x7f62b007af00 2026-03-09T20:20:45.789 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.788+0000 7f62adffb640 1 -- 192.168.123.105:0/4109364722 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f62a4002ff0 con 0x7f62b0085ca0 2026-03-09T20:20:45.789 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.788+0000 7f62adffb640 1 -- 192.168.123.105:0/4109364722 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 54426696 0 0) 0x7f62b0086320 con 0x7f62b0074040 2026-03-09T20:20:45.789 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.788+0000 7f62adffb640 1 -- 192.168.123.105:0/4109364722 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f62b01c36b0 con 0x7f62b0074040 2026-03-09T20:20:45.789 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.788+0000 7f62adffb640 1 -- 192.168.123.105:0/4109364722 <== mon.2 v1:192.168.123.105:6790/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 864675290 0 0) 0x7f62b0086150 con 0x7f62b007af00 2026-03-09T20:20:45.789 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.789+0000 7f62adffb640 1 -- 192.168.123.105:0/4109364722 >> v1:192.168.123.109:6789/0 conn(0x7f62b0085ca0 legacy=0x7f62b01bff70 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:45.789 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.789+0000 7f62adffb640 1 -- 192.168.123.105:0/4109364722 >> v1:192.168.123.105:6789/0 conn(0x7f62b0074040 legacy=0x7f62b0083800 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:45.789 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.789+0000 7f62adffb640 1 -- 192.168.123.105:0/4109364722 --> v1:192.168.123.105:6790/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f62b01c4890 con 0x7f62b007af00 2026-03-09T20:20:45.789 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.789+0000 7f62b58d0640 1 -- 192.168.123.105:0/4109364722 --> v1:192.168.123.105:6790/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f62b01c3880 con 0x7f62b007af00 2026-03-09T20:20:45.789 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.789+0000 7f62b58d0640 1 -- 192.168.123.105:0/4109364722 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=0}) -- 0x7f62b01c3d90 con 0x7f62b007af00 2026-03-09T20:20:45.789 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.790+0000 7f62b58d0640 1 -- 192.168.123.105:0/4109364722 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f62b010a470 con 0x7f62b007af00 
2026-03-09T20:20:45.789 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.790+0000 7f62adffb640 1 -- 192.168.123.105:0/4109364722 <== mon.2 v1:192.168.123.105:6790/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f62a8004040 con 0x7f62b007af00 2026-03-09T20:20:45.793 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.790+0000 7f62adffb640 1 -- 192.168.123.105:0/4109364722 <== mon.2 v1:192.168.123.105:6790/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f62a8005390 con 0x7f62b007af00 2026-03-09T20:20:45.794 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.794+0000 7f62adffb640 1 -- 192.168.123.105:0/4109364722 <== mon.2 v1:192.168.123.105:6790/0 7 ==== mgrmap(e 20) ==== 99806+0+0 (unknown 3641764485 0 0) 0x7f62a8005530 con 0x7f62b007af00 2026-03-09T20:20:45.794 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.795+0000 7f62adffb640 1 -- 192.168.123.105:0/4109364722 <== mon.2 v1:192.168.123.105:6790/0 8 ==== osd_map(59..59 src has 1..59) ==== 6152+0+0 (unknown 1608023118 0 0) 0x7f62a8095c90 con 0x7f62b007af00 2026-03-09T20:20:45.795 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.795+0000 7f62adffb640 1 -- 192.168.123.105:0/4109364722 <== mon.2 v1:192.168.123.105:6790/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f62a8096130 con 0x7f62b007af00 2026-03-09T20:20:45.896 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.893+0000 7f62b58d0640 1 -- 192.168.123.105:0/4109364722 --> v1:192.168.123.105:6800/1903060503 -- mgr_command(tid 0: {"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}) -- 0x7f62b0080c40 con 0x7f6294078790 2026-03-09T20:20:45.897 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.898+0000 7f62adffb640 1 -- 192.168.123.105:0/4109364722 <== mgr.24602 v1:192.168.123.105:6800/1903060503 1 ==== mgr_command_reply(tid 0: 0 dumped all) ==== 18+0+346517 (unknown 2965378022 0 2157584147) 0x7f62b0080c40 con 0x7f6294078790 2026-03-09T20:20:45.898 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:20:45.899 INFO:teuthology.orchestra.run.vm05.stderr:dumped all 2026-03-09T20:20:45.903 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.902+0000 7f62b58d0640 1 -- 192.168.123.105:0/4109364722 >> v1:192.168.123.105:6800/1903060503 conn(0x7f6294078790 legacy=0x7f629407ac30 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:45.903 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.902+0000 7f62b58d0640 1 -- 192.168.123.105:0/4109364722 >> v1:192.168.123.105:6790/0 conn(0x7f62b007af00 legacy=0x7f62b0085590 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:45.903 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.902+0000 7f62b58d0640 1 -- 192.168.123.105:0/4109364722 shutdown_connections 2026-03-09T20:20:45.903 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.902+0000 7f62b58d0640 1 -- 192.168.123.105:0/4109364722 >> 192.168.123.105:0/4109364722 conn(0x7f62b006d560 msgr2=0x7f62b010f240 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:45.903 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.902+0000 7f62b58d0640 1 -- 192.168.123.105:0/4109364722 shutdown_connections 2026-03-09T20:20:45.903 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:45.902+0000 7f62b58d0640 1 -- 192.168.123.105:0/4109364722 wait complete. 
2026-03-09T20:20:46.064 INFO:teuthology.orchestra.run.vm05.stdout:{"pg_ready":true,"pg_map":{"version":3,"stamp":"2026-03-09T20:20:45.199005+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":465419,"num_objects":199,"num_object_clones":0,"num_object_copies":597,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":199,"num_whiteouts":0,"num_read":785,"num_read_kb":528,"num_write":493,"num_write_kb":629,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":505,"ondisk_log_size":505,"up":396,"acting":396,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":396,"num_osds":8,"num_per_pool_osds":8,"num_per_pool_omap_osds":8,"kb":167739392,"kb_used":220792,"kb_used_data":6076,"kb_used_omap":12,"kb_used_meta":214515,"kb_avail":167518600,"statfs":{"total":171765137408,"available":171539046400,"internally_reserved":0,"allocated":6221824,"data_stored":3196561,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":12712,"internal_metadata":219663960},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":15,"apply_latency_ms":15,"commit_latency_ns":15000000,"apply_latency_ns":15000000},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"0.000000"},"pg_stats":[{"pgid":"3.1f","version":"50'1","reported_seq":35,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.207936+0000","last_cha
nge":"2026-03-09T20:20:25.074795+0000","last_active":"2026-03-09T20:20:44.207936+0000","last_peered":"2026-03-09T20:20:44.207936+0000","last_clean":"2026-03-09T20:20:44.207936+0000","last_became_active":"2026-03-09T20:20:25.074587+0000","last_became_peered":"2026-03-09T20:20:25.074587+0000","last_unstale":"2026-03-09T20:20:44.207936+0000","last_undegraded":"2026-03-09T20:20:44.207936+0000","last_fullsized":"2026-03-09T20:20:44.207936+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_clean_scrub_stamp":"2026-03-09T20:20:24.048536+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T02:59:28.991350+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":993,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":7,"num_read_kb":7,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,2],"acting":[0,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.18","version":"57'9","reported_seq":41,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.210056+0000","last_change":"2026-03-09T20:20:27.095670+0000","last_active":"2026-03-09T20:20:44.210056+0000","last_peered":"2026-03-09T20:20:44.210056+0000","last_clean":"2026-03-09T20:20:44.210056+0000","last_became_active":"2026-03-09T20:20:27.095593+0000","last_became_peered":"2026-03-09T20:20:27.095593+0000","last_unstale":"2026-03-09T20:20:44.210056+0000","last_undegraded":"2026-03-09T20:20:44.210056+0000","last_fullsized":"2026-03-09T20:20:44.210056+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_clean_scrub_stamp":"2026-03-09T20:20:26.062853+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"perio
dic scrub scheduled @ 2026-03-10T23:27:34.553453+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.19","version":"0'0","reported_seq":21,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.211913+0000","last_change":"2026-03-09T20:20:29.106470+0000","last_active":"2026-03-09T20:20:44.211913+0000","last_peered":"2026-03-09T20:20:44.211913+0000","last_clean":"2026-03-09T20:20:44.211913+0000","last_became_active":"2026-03-09T20:20:29.104210+0000","last_became_peered":"2026-03-09T20:20:29.104210+0000","last_unstale":"2026-03-09T20:20:44.211913+0000","last_undegraded":"2026-03-09T20:20:44.211913+0000","last_fullsized":"2026-03-09T20:20:44.211913+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_clean_scrub_stamp":"2026-03-09T20:20:28.074724+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:19:08.775003+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,7],"acting":[1,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.1a","version":"0'0","reported_seq":17,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.210030+0000","last_change":"2026-03-09T20:20:31.138050+0000","last_active":"2026-03-09T20:20:44.210030+0000","last_peered":"2026-03-09T20:20:44.210030+0000","last_clean":"2026-03-09T20:20:44.210030+0000","last_became_active":"2026-03-09T20:20:31.137942+0000","last_became_peered":"2026-03-09T20:20:31.137942+0000","last_unstale":"2026-03-09T20:20:44.210030+0000","last_undegraded":"2026-03-09T20:20:44.210030+0000","last_fullsized":"2026-03-09T20:20:44.210030+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_clean_scrub_stamp":"2026-03-09T20:20:30.082663+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:05:11.238957+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,1],"acting":[4,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.1b","version":"0'0","reported_seq":17,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.210124+0000","last_change":"2026-03-09T20:20:31.249608+0000","last_active":"2026-03-09T20:20:44.210124+0000","last_peered":"2026-03-09T20:20:44.210124+0000","last_clean":"2026-03-09T20:20:44.210124+0000","last_became_active":"2026-03-09T20:20:31.249464+0000","last_became_peered":"2026-03-09T20:20:31.249464+0000","last_unstale":"2026-03-09T20:20:44.210124+0000","last_undegraded":"2026-03-09T20:20:44.210124+0000","last_fullsized":"2026-03-09T20:20:44.210124+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_clean_scrub_stamp":"2026-03-09T20:20:30.082663+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T07:59:23.201604+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,6],"acting":[3,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.1e","version":"0'0","reported_seq":29,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.210296+0000","last_change":"2026-03-09T20:20:25.082535+0000","last_active":"2026-03-09T20:20:44.210296+0000","last_peered":"2026-03-09T20:20:44.210296+0000","last_clean":"2026-03-09T20:20:44.210296+0000","last_became_active":"2026-03-09T20:20:25.082460+0000","last_became_peered":"2026-03-09T20:20:25.082460+0000","last_unstale":"2026-03-09T20:20:44.210296+0000","last_undegraded":"2026-03-09T20:20:44.210296+0000","last_fullsized":"2026-03-09T20:20:44.210296+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_clean_scrub_stamp":"2026-03-09T20:20:24.048536+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:04:06.831850+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,2],"acting":[3,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.19","version":"57'15","reported_seq":50,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.210151+0000","last_change":"2026-03-09T20:20:27.103977+0000","last_active":"2026-03-09T20:20:44.210151+0000","last_peered":"2026-03-09T20:20:44.210151+0000","last_clean":"2026-03-09T20:20:44.210151+0000","last_became_active":"2026-03-09T20:20:27.103733+0000","last_became_peered":"2026-03-09T20:20:27.103733+0000","last_unstale":"2026-03-09T20:20:44.210151+0000","last_undegraded":"2026-03-09T20:20:44.210151+0000","last_fullsized":"2026-03-09T20:20:44.210151+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_clean_scrub_stamp":"2026-03-09T20:20:26.062853+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:55:57.842631+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,2,0],"acting":[3,2,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.18","version":"0'0","reported_seq":21,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.208883+0000","last_change":"2026-03-09T20:20:29.111054+0000","last_active":"2026-03-09T20:20:44.208883+0000","last_peered":"2026-03-09T20:20:44.208883+0000","last_clean":"2026-03-09T20:20:44.208883+0000","last_became_active":"2026-03-09T20:20:29.110878+0000","last_became_peered":"2026-03-09T20:20:29.110878+0000","last_unstale":"2026-03-09T20:20:44.208883+0000","last_undegraded":"2026-03-09T20:20:44.208883+0000","last_fullsized":"2026-03-09T20:20:44.208883+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_clean_scrub_stamp":"2026-03-09T20:20:28.074724+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:23:10.316453+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.1d","version":"0'0","reported_seq":29,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.157584+0000","last_change":"2026-03-09T20:20:25.087332+0000","last_active":"2026-03-09T20:20:44.157584+0000","last_peered":"2026-03-09T20:20:44.157584+0000","last_clean":"2026-03-09T20:20:44.157584+0000","last_became_active":"2026-03-09T20:20:25.087083+0000","last_became_peered":"2026-03-09T20:20:25.087083+0000","last_unstale":"2026-03-09T20:20:44.157584+0000","last_undegraded":"2026-03-09T20:20:44.157584+0000","last_fullsized":"2026-03-09T20:20:44.157584+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_clean_scrub_stamp":"2026-03-09T20:20:24.048536+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T06:46:38.659332+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,6],"acting":[5,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"4.1a","version":"57'9","reported_seq":41,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.209905+0000","last_change":"2026-03-09T20:20:27.110039+0000","last_active":"2026-03-09T20:20:44.209905+0000","last_peered":"2026-03-09T20:20:44.209905+0000","last_clean":"2026-03-09T20:20:44.209905+0000","last_became_active":"2026-03-09T20:20:27.109937+0000","last_became_peered":"2026-03-09T20:20:27.109937+0000","last_unstale":"2026-03-09T20:20:44.209905+0000","last_undegraded":"2026-03-09T20:20:44.209905+0000","last_fullsized":"2026-03-09T20:20:44.209905+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_clean_scrub_stamp":"2026-03-09T20:20:26.062853+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T07:42:17.235033+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,0],"acting":[4,3,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.1b","version":"0'0","reported_seq":21,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.153839+0000","last_change":"2026-03-09T20:20:29.102451+0000","last_active":"2026-03-09T20:20:44.153839+0000","last_peered":"2026-03-09T20:20:44.153839+0000","last_clean":"2026-03-09T20:20:44.153839+0000","last_became_active":"2026-03-09T20:20:29.102182+0000","last_became_peered":"2026-03-09T20:20:29.102182+0000","last_unstale":"2026-03-09T20:20:44.153839+0000","last_undegraded":"2026-03-09T20:20:44.153839+0000","last_fullsized":"2026-03-09T20:20:44.153839+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_clean_scrub_stamp":"2026-03-09T20:20:28.074724+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T05:26:25.221504+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,0,7],"acting":[5,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.18","version":"0'0","reported_seq":17,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.208132+0000","last_change":"2026-03-09T20:20:31.103337+0000","last_active":"2026-03-09T20:20:44.208132+0000","last_peered":"2026-03-09T20:20:44.208132+0000","last_clean":"2026-03-09T20:20:44.208132+0000","last_became_active":"2026-03-09T20:20:31.103258+0000","last_became_peered":"2026-03-09T20:20:31.103258+0000","last_unstale":"2026-03-09T20:20:44.208132+0000","last_undegraded":"2026-03-09T20:20:44.208132+0000","last_fullsized":"2026-03-09T20:20:44.208132+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_clean_scrub_stamp":"2026-03-09T20:20:30.082663+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T04:45:30.447375+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,7],"acting":[0,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.1c","version":"0'0","reported_seq":29,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.154027+0000","last_change":"2026-03-09T20:20:25.086638+0000","last_active":"2026-03-09T20:20:44.154027+0000","last_peered":"2026-03-09T20:20:44.154027+0000","last_clean":"2026-03-09T20:20:44.154027+0000","last_became_active":"2026-03-09T20:20:25.086502+0000","last_became_peered":"2026-03-09T20:20:25.086502+0000","last_unstale":"2026-03-09T20:20:44.154027+0000","last_undegraded":"2026-03-09T20:20:44.154027+0000","last_fullsized":"2026-03-09T20:20:44.154027+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_clean_scrub_stamp":"2026-03-09T20:20:24.048536+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:56:26.934241+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,1],"acting":[5,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"4.1b","version":"57'5","reported_seq":35,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.209339+0000","last_change":"2026-03-09T20:20:27.109772+0000","last_active":"2026-03-09T20:20:44.209339+0000","last_peered":"2026-03-09T20:20:44.209339+0000","last_clean":"2026-03-09T20:20:44.209339+0000","last_became_active":"2026-03-09T20:20:27.109668+0000","last_became_peered":"2026-03-09T20:20:27.109668+0000","last_unstale":"2026-03-09T20:20:44.209339+0000","last_undegraded":"2026-03-09T20:20:44.209339+0000","last_fullsized":"2026-03-09T20:20:44.209339+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_clean_scrub_stamp":"2026-03-09T20:20:26.062853+0000","objects_scrubbed":0,"log_size":5,"log_dups_size":0,"ondisk_log_size":5,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:03:28.157237+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":11,"num_read_kb":7,"num_write":6,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,1],"acting":[4,3,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.1a","version":"0'0","reported_seq":21,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.207091+0000","last_change":"2026-03-09T20:20:29.113168+0000","last_active":"2026-03-09T20:20:44.207091+0000","last_peered":"2026-03-09T20:20:44.207091+0000","last_clean":"2026-03-09T20:20:44.207091+0000","last_became_active":"2026-03-09T20:20:29.112801+0000","last_became_peered":"2026-03-09T20:20:29.112801+0000","last_unstale":"2026-03-09T20:20:44.207091+0000","last_undegraded":"2026-03-09T20:20:44.207091+0000","last_fullsized":"2026-03-09T20:20:44.207091+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_clean_scrub_stamp":"2026-03-09T20:20:28.074724+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T07:38:31.585636+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,1],"acting":[7,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.19","version":"0'0","reported_seq":17,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.153999+0000","last_change":"2026-03-09T20:20:31.115336+0000","last_active":"2026-03-09T20:20:44.153999+0000","last_peered":"2026-03-09T20:20:44.153999+0000","last_clean":"2026-03-09T20:20:44.153999+0000","last_became_active":"2026-03-09T20:20:31.115219+0000","last_became_peered":"2026-03-09T20:20:31.115219+0000","last_unstale":"2026-03-09T20:20:44.153999+0000","last_undegraded":"2026-03-09T20:20:44.153999+0000","last_fullsized":"2026-03-09T20:20:44.153999+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_clean_scrub_stamp":"2026-03-09T20:20:30.082663+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T04:02:51.962337+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,3],"acting":[5,1,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.1e","version":"0'0","reported_seq":17,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.209117+0000","last_change":"2026-03-09T20:20:31.250898+0000","last_active":"2026-03-09T20:20:44.209117+0000","last_peered":"2026-03-09T20:20:44.209117+0000","last_clean":"2026-03-09T20:20:44.209117+0000","last_became_active":"2026-03-09T20:20:31.250642+0000","last_became_peered":"2026-03-09T20:20:31.250642+0000","last_unstale":"2026-03-09T20:20:44.209117+0000","last_undegraded":"2026-03-09T20:20:44.209117+0000","last_fullsized":"2026-03-09T20:20:44.209117+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_clean_scrub_stamp":"2026-03-09T20:20:30.082663+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T06:34:13.794191+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,5],"acting":[4,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.1b","version":"0'0","reported_seq":29,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.207873+0000","last_change":"2026-03-09T20:20:25.092158+0000","last_active":"2026-03-09T20:20:44.207873+0000","last_peered":"2026-03-09T20:20:44.207873+0000","last_clean":"2026-03-09T20:20:44.207873+0000","last_became_active":"2026-03-09T20:20:25.091921+0000","last_became_peered":"2026-03-09T20:20:25.091921+0000","last_unstale":"2026-03-09T20:20:44.207873+0000","last_undegraded":"2026-03-09T20:20:44.207873+0000","last_fullsized":"2026-03-09T20:20:44.207873+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_clean_scrub_stamp":"2026-03-09T20:20:24.048536+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:54:17.023620+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,7],"acting":[0,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.1c","version":"57'15","reported_seq":50,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.213594+0000","last_change":"2026-03-09T20:20:27.120518+0000","last_active":"2026-03-09T20:20:44.213594+0000","last_peered":"2026-03-09T20:20:44.213594+0000","last_clean":"2026-03-09T20:20:44.213594+0000","last_became_active":"2026-03-09T20:20:27.120319+0000","last_became_peered":"2026-03-09T20:20:27.120319+0000","last_unstale":"2026-03-09T20:20:44.213594+0000","last_undegraded":"2026-03-09T20:20:44.213594+0000","last_fullsized":"2026-03-09T20:20:44.213594+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_clean_scrub_stamp":"2026-03-09T20:20:26.062853+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T08:19:21.870322+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,1,3],"acting":[2,1,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.1d","version":"0'0","reported_seq":21,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.211690+0000","last_change":"2026-03-09T20:20:29.106383+0000","last_active":"2026-03-09T20:20:44.211690+0000","last_peered":"2026-03-09T20:20:44.211690+0000","last_clean":"2026-03-09T20:20:44.211690+0000","last_became_active":"2026-03-09T20:20:29.104077+0000","last_became_peered":"2026-03-09T20:20:29.104077+0000","last_unstale":"2026-03-09T20:20:44.211690+0000","last_undegraded":"2026-03-09T20:20:44.211690+0000","last_fullsized":"2026-03-09T20:20:44.211690+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_clean_scrub_stamp":"2026-03-09T20:20:28.074724+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:00:01.245236+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,0],"acting":[1,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.1f","version":"0'0","reported_seq":17,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.210771+0000","last_change":"2026-03-09T20:20:31.250276+0000","last_active":"2026-03-09T20:20:44.210771+0000","last_peered":"2026-03-09T20:20:44.210771+0000","last_clean":"2026-03-09T20:20:44.210771+0000","last_became_active":"2026-03-09T20:20:31.249788+0000","last_became_peered":"2026-03-09T20:20:31.249788+0000","last_unstale":"2026-03-09T20:20:44.210771+0000","last_undegraded":"2026-03-09T20:20:44.210771+0000","last_fullsized":"2026-03-09T20:20:44.210771+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_clean_scrub_stamp":"2026-03-09T20:20:30.082663+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T07:55:08.653497+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,5],"acting":[3,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.1a","version":"0'0","reported_seq":29,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.209483+0000","last_change":"2026-03-09T20:20:25.087966+0000","last_active":"2026-03-09T20:20:44.209483+0000","last_peered":"2026-03-09T20:20:44.209483+0000","last_clean":"2026-03-09T20:20:44.209483+0000","last_became_active":"2026-03-09T20:20:25.087850+0000","last_became_peered":"2026-03-09T20:20:25.087850+0000","last_unstale":"2026-03-09T20:20:44.209483+0000","last_undegraded":"2026-03-09T20:20:44.209483+0000","last_fullsized":"2026-03-09T20:20:44.209483+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_clean_scrub_stamp":"2026-03-09T20:20:24.048536+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:24:32.929101+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,2],"acting":[4,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"4.1d","version":"57'12","reported_seq":48,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.210803+0000","last_change":"2026-03-09T20:20:27.101054+0000","last_active":"2026-03-09T20:20:44.210803+0000","last_peered":"2026-03-09T20:20:44.210803+0000","last_clean":"2026-03-09T20:20:44.210803+0000","last_became_active":"2026-03-09T20:20:27.100815+0000","last_became_peered":"2026-03-09T20:20:27.100815+0000","last_unstale":"2026-03-09T20:20:44.210803+0000","last_undegraded":"2026-03-09T20:20:44.210803+0000","last_fullsized":"2026-03-09T20:20:44.210803+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_clean_scrub_stamp":"2026-03-09T20:20:26.062853+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:11:29.356519+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":25,"num_read_kb":16,"num_write":14,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,4],"acting":[3,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.1c","version":"0'0","reported_seq":21,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.209443+0000","last_change":"2026-03-09T20:20:29.103620+0000","last_active":"2026-03-09T20:20:44.209443+0000","last_peered":"2026-03-09T20:20:44.209443+0000","last_clean":"2026-03-09T20:20:44.209443+0000","last_became_active":"2026-03-09T20:20:29.103532+0000","last_became_peered":"2026-03-09T20:20:29.103532+0000","last_unstale":"2026-03-09T20:20:44.209443+0000","last_undegraded":"2026-03-09T20:20:44.209443+0000","last_fullsized":"2026-03-09T20:20:44.209443+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_clean_scrub_stamp":"2026-03-09T20:20:28.074724+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:43:06.970123+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,2],"acting":[4,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.1c","version":"57'1","reported_seq":18,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.208057+0000","last_change":"2026-03-09T20:20:31.129275+0000","last_active":"2026-03-09T20:20:44.208057+0000","last_peered":"2026-03-09T20:20:44.208057+0000","last_clean":"2026-03-09T20:20:44.208057+0000","last_became_active":"2026-03-09T20:20:31.128959+0000","last_became_peered":"2026-03-09T20:20:31.128959+0000","last_unstale":"2026-03-09T20:20:44.208057+0000","last_undegraded":"2026-03-09T20:20:44.208057+0000","last_fullsized":"2026-03-09T20:20:44.208057+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_clean_scrub_stamp":"2026-03-09T20:20:30.082663+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:30:40.315380+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":403,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,2],"acting":[7,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.19","version":"50'1","reported_seq":30,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.212350+0000","last_change":"2026-03-09T20:20:25.087776+0000","last_active":"2026-03-09T20:20:44.212350+0000","last_peered":"2026-03-09T20:20:44.212350+0000","last_clean":"2026-03-09T20:20:44.212350+0000","last_became_active":"2026-03-09T20:20:25.087660+0000","last_became_peered":"2026-03-09T20:20:25.087660+0000","last_unstale":"2026-03-09T20:20:44.212350+0000","last_undegraded":"2026-03-09T20:20:44.212350+0000","last_fullsized":"2026-03-09T20:20:44.212350+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_clean_scrub_stamp":"2026-03-09T20:20:24.048536+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T08:18:20.292257+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":46,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,3,4],"acting":[1,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.1e","version":"57'10","reported_seq":40,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.208533+0000","last_change":"2026-03-09T20:20:27.265191+0000","last_active":"2026-03-09T20:20:44.208533+0000","last_peered":"2026-03-09T20:20:44.208533+0000","last_clean":"2026-03-09T20:20:44.208533+0000","last_became_active":"2026-03-09T20:20:27.264986+0000","last_became_peered":"2026-03-09T20:20:27.264986+0000","last_unstale":"2026-03-09T20:20:44.208533+0000","last_undegraded":"2026-03-09T20:20:44.208533+0000","last_fullsized":"2026-03-09T20:20:44.208533+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_clean_scrub_stamp":"2026-03-09T20:20:26.062853+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T04:04:07.482369+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.1f","version":"57'8","reported_seq":37,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.161635+0000","last_change":"2026-03-09T20:20:29.106178+0000","last_active":"2026-03-09T20:20:44.161635+0000","last_peered":"2026-03-09T20:20:44.161635+0000","last_clean":"2026-03-09T20:20:44.161635+0000","last_became_active":"2026-03-09T20:20:29.105851+0000","last_became_peered":"2026-03-09T20:20:29.105851+0000","last_unstale":"2026-03-09T20:20:44.161635+0000","last_undegraded":"2026-03-09T20:20:44.161635+0000","last_fullsized":"2026-03-09T20:20:44.161635+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_clean_scrub_stamp":"2026-03-09T20:20:28.074724+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:54:48.750457+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"4.f","version":"57'15","reported_seq":50,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.212621+0000","last_change":"2026-03-09T20:20:27.109770+0000","last_active":"2026-03-09T20:20:44.212621+0000","last_peered":"2026-03-09T20:20:44.212621+0000","last_clean":"2026-03-09T20:20:44.212621+0000","last_became_active":"2026-03-09T20:20:27.109462+0000","last_became_peered":"2026-03-09T20:20:27.109462+0000","last_unstale":"2026-03-09T20:20:44.212621+0000","last_undegraded":"2026-03-09T20:20:44.212621+0000","last_fullsized":"2026-03-09T20:20:44.212621+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_clean_scrub_stamp":"2026-03-09T20:20:26.062853+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T07:49:10.215976+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,3,4],"acting":[1,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.8","version":"0'0","reported_seq":29,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.210216+0000","last_change":"2026-03-09T20:20:25.088283+0000","last_active":"2026-03-09T20:20:44.210216+0000","last_peered":"2026-03-09T20:20:44.210216+0000","last_clean":"2026-03-09T20:20:44.210216+0000","last_became_active":"2026-03-09T20:20:25.088209+0000","last_became_peered":"2026-03-09T20:20:25.088209+0000","last_unstale":"2026-03-09T20:20:44.210216+0000","last_undegraded":"2026-03-09T20:20:44.210216+0000","last_fullsized":"2026-03-09T20:20:44.210216+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_clean_scrub_stamp":"2026-03-09T20:20:24.048536+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:30:34.868814+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.e","version":"57'8","reported_seq":34,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.209004+0000","last_change":"2026-03-09T20:20:29.111330+0000","last_active":"2026-03-09T20:20:44.209004+0000","last_peered":"2026-03-09T20:20:44.209004+0000","last_clean":"2026-03-09T20:20:44.209004+0000","last_became_active":"2026-03-09T20:20:29.110769+0000","last_became_peered":"2026-03-09T20:20:29.110769+0000","last_unstale":"2026-03-09T20:20:44.209004+0000","last_undegraded":"2026-03-09T20:20:44.209004+0000","last_fullsized":"2026-03-09T20:20:44.209004+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_clean_scrub_stamp":"2026-03-09T20:20:28.074724+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:16:08.105264+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,0],"acting":[4,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.d","version":"0'0","reported_seq":17,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.153579+0000","last_change":"2026-03-09T20:20:31.104804+0000","last_active":"2026-03-09T20:20:44.153579+0000","last_peered":"2026-03-09T20:20:44.153579+0000","last_clean":"2026-03-09T20:20:44.153579+0000","last_became_active":"2026-03-09T20:20:31.104696+0000","last_became_peered":"2026-03-09T20:20:31.104696+0000","last_unstale":"2026-03-09T20:20:44.153579+0000","last_undegraded":"2026-03-09T20:20:44.153579+0000","last_fullsized":"2026-03-09T20:20:44.153579+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_clean_scrub_stamp":"2026-03-09T20:20:30.082663+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T07:56:57.777870+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"4.0","version":"57'18","reported_seq":57,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.210337+0000","last_change":"2026-03-09T20:20:27.266671+0000","last_active":"2026-03-09T20:20:44.210337+0000","last_peered":"2026-03-09T20:20:44.210337+0000","last_clean":"2026-03-09T20:20:44.210337+0000","last_became_active":"2026-03-09T20:20:27.266387+0000","last_became_peered":"2026-03-09T20:20:27.266387+0000","last_unstale":"2026-03-09T20:20:44.210337+0000","last_undegraded":"2026-03-09T20:20:44.210337+0000","last_fullsized":"2026-03-09T20:20:44.210337+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_clean_scrub_stamp":"2026-03-09T20:20:26.062853+0000","objects_scrubbed":0,"log_size":18,"log_dups_size":0,"ondisk_log_size":18,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:18:22.297101+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":34,"num_read_kb":22,"num_write":20,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,0],"acting":[3,7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.7","version":"0'0","reported_seq":29,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.210298+0000","last_change":"2026-03-09T20:20:25.087816+0000","last_active":"2026-03-09T20:20:44.210298+0000","last_peered":"2026-03-09T20:20:44.210298+0000","last_clean":"2026-03-09T20:20:44.210298+0000","last_became_active":"2026-03-09T20:20:25.087711+0000","last_became_peered":"2026-03-09T20:20:25.087711+0000","last_unstale":"2026-03-09T20:20:44.210298+0000","last_undegraded":"2026-03-09T20:20:44.210298+0000","last_fullsized":"2026-03-09T20:20:44.210298+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_clean_scrub_stamp":"2026-03-09T20:20:24.048536+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:46:12.701961+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,0],"acting":[3,7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.1","version":"0'0","reported_seq":21,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.208921+0000","last_change":"2026-03-09T20:20:29.103375+0000","last_active":"2026-03-09T20:20:44.208921+0000","last_peered":"2026-03-09T20:20:44.208921+0000","last_clean":"2026-03-09T20:20:44.208921+0000","last_became_active":"2026-03-09T20:20:29.103285+0000","last_became_peered":"2026-03-09T20:20:29.103285+0000","last_unstale":"2026-03-09T20:20:44.208921+0000","last_undegraded":"2026-03-09T20:20:44.208921+0000","last_fullsized":"2026-03-09T20:20:44.208921+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_clean_scrub_stamp":"2026-03-09T20:20:28.074724+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:57:43.722816+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,7],"acting":[4,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.2","version":"0'0","reported_seq":17,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.208880+0000","last_change":"2026-03-09T20:20:31.130321+0000","last_active":"2026-03-09T20:20:44.208880+0000","last_peered":"2026-03-09T20:20:44.208880+0000","last_clean":"2026-03-09T20:20:44.208880+0000","last_became_active":"2026-03-09T20:20:31.129961+0000","last_became_peered":"2026-03-09T20:20:31.129961+0000","last_unstale":"2026-03-09T20:20:44.208880+0000","last_undegraded":"2026-03-09T20:20:44.208880+0000","last_fullsized":"2026-03-09T20:20:44.208880+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_clean_scrub_stamp":"2026-03-09T20:20:30.082663+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T06:11:36.013545+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,2],"acting":[4,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"4.1","version":"57'14","reported_seq":46,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.209223+0000","last_change":"2026-03-09T20:20:27.094332+0000","last_active":"2026-03-09T20:20:44.209223+0000","last_peered":"2026-03-09T20:20:44.209223+0000","last_clean":"2026-03-09T20:20:44.209223+0000","last_became_active":"2026-03-09T20:20:27.094255+0000","last_became_peered":"2026-03-09T20:20:27.094255+0000","last_unstale":"2026-03-09T20:20:44.209223+0000","last_undegraded":"2026-03-09T20:20:44.209223+0000","last_fullsized":"2026-03-09T20:20:44.209223+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_clean_scrub_stamp":"2026-03-09T20:20:26.062853+0000","objects_scrubbed":0,"log_size":14,"log_dups_size":0,"ondisk_log_size":14,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T05:37:09.136466+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":21,"num_read_kb":14,"num_write":14,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,6],"acting":[4,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.6","version":"50'1","reported_seq":30,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.207928+0000","last_change":"2026-03-09T20:20:25.094137+0000","last_active":"2026-03-09T20:20:44.207928+0000","last_peered":"2026-03-09T20:20:44.207928+0000","last_clean":"2026-03-09T20:20:44.207928+0000","last_became_active":"2026-03-09T20:20:25.094006+0000","last_became_peered":"2026-03-09T20:20:25.094006+0000","last_unstale":"2026-03-09T20:20:44.207928+0000","last_undegraded":"2026-03-09T20:20:44.207928+0000","last_fullsized":"2026-03-09T20:20:44.207928+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_clean_scrub_stamp":"2026-03-09T20:20:24.048536+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:54:49.194948+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":46,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.0","version":"57'8","reported_seq":34,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.209791+0000","last_change":"2026-03-09T20:20:29.113259+0000","last_active":"2026-03-09T20:20:44.209791+0000","last_peered":"2026-03-09T20:20:44.209791+0000","last_clean":"2026-03-09T20:20:44.209791+0000","last_became_active":"2026-03-09T20:20:29.111693+0000","last_became_peered":"2026-03-09T20:20:29.111693+0000","last_unstale":"2026-03-09T20:20:44.209791+0000","last_undegraded":"2026-03-09T20:20:44.209791+0000","last_fullsized":"2026-03-09T20:20:44.209791+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_clean_scrub_stamp":"2026-03-09T20:20:28.074724+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:10:33.721227+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,4],"acting":[3,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.3","version":"0'0","reported_seq":17,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.207463+0000","last_change":"2026-03-09T20:20:31.247833+0000","last_active":"2026-03-09T20:20:44.207463+0000","last_peered":"2026-03-09T20:20:44.207463+0000","last_clean":"2026-03-09T20:20:44.207463+0000","last_became_active":"2026-03-09T20:20:31.247091+0000","last_became_peered":"2026-03-09T20:20:31.247091+0000","last_unstale":"2026-03-09T20:20:44.207463+0000","last_undegraded":"2026-03-09T20:20:44.207463+0000","last_fullsized":"2026-03-09T20:20:44.207463+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_clean_scrub_stamp":"2026-03-09T20:20:30.082663+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:19:34.243192+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,2],"acting":[7,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"4.2","version":"57'10","reported_seq":40,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.212050+0000","last_change":"2026-03-09T20:20:27.096567+0000","last_active":"2026-03-09T20:20:44.212050+0000","last_peered":"2026-03-09T20:20:44.212050+0000","last_clean":"2026-03-09T20:20:44.212050+0000","last_became_active":"2026-03-09T20:20:27.095275+0000","last_became_peered":"2026-03-09T20:20:27.095275+0000","last_unstale":"2026-03-09T20:20:44.212050+0000","last_undegraded":"2026-03-09T20:20:44.212050+0000","last_fullsized":"2026-03-09T20:20:44.212050+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_clean_scrub_stamp":"2026-03-09T20:20:26.062853+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:37:40.006746+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,4],"acting":[1,5,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.5","version":"0'0","reported_seq":29,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.154221+0000","last_change":"2026-03-09T20:20:25.092970+0000","last_active":"2026-03-09T20:20:44.154221+0000","last_peered":"2026-03-09T20:20:44.154221+0000","last_clean":"2026-03-09T20:20:44.154221+0000","last_became_active":"2026-03-09T20:20:25.092473+0000","last_became_peered":"2026-03-09T20:20:25.092473+0000","last_unstale":"2026-03-09T20:20:44.154221+0000","last_undegraded":"2026-03-09T20:20:44.154221+0000","last_fullsized":"2026-03-09T20:20:44.154221+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_clean_scrub_stamp":"2026-03-09T20:20:24.048536+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:21:04.309473+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,2],"acting":[5,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.3","version":"57'8","reported_seq":37,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.208588+0000","last_change":"2026-03-09T20:20:29.106483+0000","last_active":"2026-03-09T20:20:44.208588+0000","last_peered":"2026-03-09T20:20:44.208588+0000","last_clean":"2026-03-09T20:20:44.208588+0000","last_became_active":"2026-03-09T20:20:29.105980+0000","last_became_peered":"2026-03-09T20:20:29.105980+0000","last_unstale":"2026-03-09T20:20:44.208588+0000","last_undegraded":"2026-03-09T20:20:44.208588+0000","last_fullsized":"2026-03-09T20:20:44.208588+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_clean_scrub_stamp":"2026-03-09T20:20:28.074724+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:54:27.270115+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,6,5],"acting":[0,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.0","version":"0'0","reported_seq":17,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.208564+0000","last_change":"2026-03-09T20:20:31.126105+0000","last_active":"2026-03-09T20:20:44.208564+0000","last_peered":"2026-03-09T20:20:44.208564+0000","last_clean":"2026-03-09T20:20:44.208564+0000","last_became_active":"2026-03-09T20:20:31.125960+0000","last_became_peered":"2026-03-09T20:20:31.125960+0000","last_unstale":"2026-03-09T20:20:44.208564+0000","last_undegraded":"2026-03-09T20:20:44.208564+0000","last_fullsized":"2026-03-09T20:20:44.208564+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_clean_scrub_stamp":"2026-03-09T20:20:30.082663+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:01:50.793635+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,3,2],"acting":[0,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.3","version":"57'19","reported_seq":61,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.208362+0000","last_change":"2026-03-09T20:20:27.265301+0000","last_active":"2026-03-09T20:20:44.208362+0000","last_peered":"2026-03-09T20:20:44.208362+0000","last_clean":"2026-03-09T20:20:44.208362+0000","last_became_active":"2026-03-09T20:20:27.265072+0000","last_became_peered":"2026-03-09T20:20:27.265072+0000","last_unstale":"2026-03-09T20:20:44.208362+0000","last_undegraded":"2026-03-09T20:20:44.208362+0000","last_fullsized":"2026-03-09T20:20:44.208362+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_clean_scrub_stamp":"2026-03-09T20:20:26.062853+0000","objects_scrubbed":0,"log_size":19,"log_dups_size":0,"ondisk_log_size":19,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T07:42:30.517540+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":330,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":39,"num_read_kb":25,"num_write":22,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,7],"acting":[0,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.4","version":"50'1","reported_seq":35,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.212401+0000","last_change":"2026-03-09T20:20:25.074218+0000","last_active":"2026-03-09T20:20:44.212401+0000","last_peered":"2026-03-09T20:20:44.212401+0000","last_clean":"2026-03-09T20:20:44.212401+0000","last_became_active":"2026-03-09T20:20:25.074105+0000","last_became_peered":"2026-03-09T20:20:25.074105+0000","last_unstale":"2026-03-09T20:20:44.212401+0000","last_undegraded":"2026-03-09T20:20:44.212401+0000","last_fullsized":"2026-03-09T20:20:44.212401+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_clean_scrub_stamp":"2026-03-09T20:20:24.048536+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:29:54.650308+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":436,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":7,"num_read_kb":7,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,5],"acting":[1,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.2","version":"0'0","reported_seq":21,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.161595+0000","last_change":"2026-03-09T20:20:29.098516+0000","last_active":"2026-03-09T20:20:44.161595+0000","last_peered":"2026-03-09T20:20:44.161595+0000","last_clean":"2026-03-09T20:20:44.161595+0000","last_became_active":"2026-03-09T20:20:29.095388+0000","last_became_peered":"2026-03-09T20:20:29.095388+0000","last_unstale":"2026-03-09T20:20:44.161595+0000","last_undegraded":"2026-03-09T20:20:44.161595+0000","last_fullsized":"2026-03-09T20:20:44.161595+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_clean_scrub_stamp":"2026-03-09T20:20:28.074724+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:24:39.879507+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0,5],"acting":[6,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.1","version":"0'0","reported_seq":17,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.212398+0000","last_change":"2026-03-09T20:20:31.246978+0000","last_active":"2026-03-09T20:20:44.212398+0000","last_peered":"2026-03-09T20:20:44.212398+0000","last_clean":"2026-03-09T20:20:44.212398+0000","last_became_active":"2026-03-09T20:20:31.246792+0000","last_became_peered":"2026-03-09T20:20:31.246792+0000","last_unstale":"2026-03-09T20:20:44.212398+0000","last_undegraded":"2026-03-09T20:20:44.212398+0000","last_fullsized":"2026-03-09T20:20:44.212398+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_clean_scrub_stamp":"2026-03-09T20:20:30.082663+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:21:54.327212+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6,2],"acting":[1,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.4","version":"57'28","reported_seq":78,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.212327+0000","last_change":"2026-03-09T20:20:27.109872+0000","last_active":"2026-03-09T20:20:44.212327+0000","last_peered":"2026-03-09T20:20:44.212327+0000","last_clean":"2026-03-09T20:20:44.212327+0000","last_became_active":"2026-03-09T20:20:27.109651+0000","last_became_peered":"2026-03-09T20:20:27.109651+0000","last_unstale":"2026-03-09T20:20:44.212327+0000","last_undegraded":"2026-03-09T20:20:44.212327+0000","last_fullsized":"2026-03-09T20:20:44.212327+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_clean_scrub_stamp":"2026-03-09T20:20:26.062853+0000","objects_scrubbed":0,"log_size":28,"log_dups_size":0,"ondisk_log_size":28,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T04:00:15.952806+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":358,"num_objects":10,"num_object_clones":0,"num_object_copies":30,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":10,"num_whiteouts":0,"num_read":48,"num_read_kb":33,"num_write":26,"num_write_kb":4,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,3],"acting":[1,2,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.3","version":"0'0","reported_seq":29,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.209773+0000","last_change":"2026-03-09T20:20:25.098561+0000","last_active":"2026-03-09T20:20:44.209773+0000","last_peered":"2026-03-09T20:20:44.209773+0000","last_clean":"2026-03-09T20:20:44.209773+0000","last_became_active":"2026-03-09T20:20:25.098351+0000","last_became_peered":"2026-03-09T20:20:25.098351+0000","last_unstale":"2026-03-09T20:20:44.209773+0000","last_undegraded":"2026-03-09T20:20:44.209773+0000","last_fullsized":"2026-03-09T20:20:44.209773+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_clean_scrub_stamp":"2026-03-09T20:20:24.048536+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:10:28.660593+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,0,6],"acting":[4,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.2","version":"52'2","reported_seq":36,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.154130+0000","last_change":"2026-03-09T20:20:27.078701+0000","last_active":"2026-03-09T20:20:44.154130+0000","last_peered":"2026-03-09T20:20:44.154130+0000","last_clean":"2026-03-09T20:20:44.154130+0000","last_became_active":"2026-03-09T20:20:25.083178+0000","last_became_peered":"2026-03-09T20:20:25.083178+0000","last_unstale":"2026-03-09T20:20:44.154130+0000","last_undegraded":"2026-03-09T20:20:44.154130+0000","last_fullsized":"2026-03-09T20:20:44.154130+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_clean_scrub_stamp":"2026-03-09T20:20:24.048536+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T07:33:40.001564+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.001088627,"stat_sum":{"num_bytes":19,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,6],"acting":[5,1,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.5","version":"0'0","reported_seq":21,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.208350+0000","last_change":"2026-03-09T20:20:29.104121+0000","last_active":"2026-03-09T20:20:44.208350+0000","last_peered":"2026-03-09T20:20:44.208350+0000","last_clean":"2026-03-09T20:20:44.208350+0000","last_became_active":"2026-03-09T20:20:29.103984+0000","last_became_peered":"2026-03-09T20:20:29.103984+0000","last_unstale":"2026-03-09T20:20:44.208350+0000","last_undegraded":"2026-03-09T20:20:44.208350+0000","last_fullsized":"2026-03-09T20:20:44.208350+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_clean_scrub_stamp":"2026-03-09T20:20:28.074724+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:46:46.088540+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.6","version":"0'0","reported_seq":17,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.209670+0000","last_change":"2026-03-09T20:20:31.119148+0000","last_active":"2026-03-09T20:20:44.209670+0000","last_peered":"2026-03-09T20:20:44.209670+0000","last_clean":"2026-03-09T20:20:44.209670+0000","last_became_active":"2026-03-09T20:20:31.119075+0000","last_became_peered":"2026-03-09T20:20:31.119075+0000","last_unstale":"2026-03-09T20:20:44.209670+0000","last_undegraded":"2026-03-09T20:20:44.209670+0000","last_fullsized":"2026-03-09T20:20:44.209670+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_clean_scrub_stamp":"2026-03-09T20:20:30.082663+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:38:15.482442+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,4,7],"acting":[3,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.7","version":"57'13","reported_seq":52,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.212435+0000","last_change":"2026-03-09T20:20:27.096763+0000","last_active":"2026-03-09T20:20:44.212435+0000","last_peered":"2026-03-09T20:20:44.212435+0000","last_clean":"2026-03-09T20:20:44.212435+0000","last_became_active":"2026-03-09T20:20:27.096417+0000","last_became_peered":"2026-03-09T20:20:27.096417+0000","last_unstale":"2026-03-09T20:20:44.212435+0000","last_undegraded":"2026-03-09T20:20:44.212435+0000","last_fullsized":"2026-03-09T20:20:44.212435+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_clean_scrub_stamp":"2026-03-09T20:20:26.062853+0000","objects_scrubbed":0,"log_size":13,"log_dups_size":0,"ondisk_log_size":13,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T04:38:32.682696+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":330,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":30,"num_read_kb":19,"num_write":16,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,0],"acting":[1,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.0","version":"0'0","reported_seq":29,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.212487+0000","last_change":"2026-03-09T20:20:25.082140+0000","last_active":"2026-03-09T20:20:44.212487+0000","last_peered":"2026-03-09T20:20:44.212487+0000","last_clean":"2026-03-09T20:20:44.212487+0000","last_became_active":"2026-03-09T20:20:25.082048+0000","last_became_peered":"2026-03-09T20:20:25.082048+0000","last_unstale":"2026-03-09T20:20:44.212487+0000","last_undegraded":"2026-03-09T20:20:44.212487+0000","last_fullsized":"2026-03-09T20:20:44.212487+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_clean_scrub_stamp":"2026-03-09T20:20:24.048536+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T05:58:48.354702+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,6],"acting":[1,2,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2.1","version":"50'1","reported_seq":35,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.213820+0000","last_change":"2026-03-09T20:20:27.094062+0000","last_active":"2026-03-09T20:20:44.213820+0000","last_peered":"2026-03-09T20:20:44.213820+0000","last_clean":"2026-03-09T20:20:44.213820+0000","last_became_active":"2026-03-09T20:20:25.079956+0000","last_became_peered":"2026-03-09T20:20:25.079956+0000","last_unstale":"2026-03-09T20:20:44.213820+0000","last_undegraded":"2026-03-09T20:20:44.213820+0000","last_fullsized":"2026-03-09T20:20:44.213820+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_clean_scrub_stamp":"2026-03-09T20:20:24.048536+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:58:33.623329+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00043512500000000002,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,0],"acting":[2,3,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.6","version":"0'0","reported_seq":21,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.213821+0000","last_change":"2026-03-09T20:20:29.115694+0000","last_active":"2026-03-09T20:20:44.213821+0000","last_peered":"2026-03-09T20:20:44.213821+0000","last_clean":"2026-03-09T20:20:44.213821+0000","last_became_active":"2026-03-09T20:20:29.115055+0000","last_became_peered":"2026-03-09T20:20:29.115055+0000","last_unstale":"2026-03-09T20:20:44.213821+0000","last_undegraded":"2026-03-09T20:20:44.213821+0000","last_fullsized":"2026-03-09T20:20:44.213821+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_clean_scrub_stamp":"2026-03-09T20:20:28.074724+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T04:07:53.361664+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,5,7],"acting":[2,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.5","version":"0'0","reported_seq":17,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.206899+0000","last_change":"2026-03-09T20:20:31.249059+0000","last_active":"2026-03-09T20:20:44.206899+0000","last_peered":"2026-03-09T20:20:44.206899+0000","last_clean":"2026-03-09T20:20:44.206899+0000","last_became_active":"2026-03-09T20:20:31.248937+0000","last_became_peered":"2026-03-09T20:20:31.248937+0000","last_unstale":"2026-03-09T20:20:44.206899+0000","last_undegraded":"2026-03-09T20:20:44.206899+0000","last_fullsized":"2026-03-09T20:20:44.206899+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_clean_scrub_stamp":"2026-03-09T20:20:30.082663+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:46:55.630278+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,3],"acting":[7,6,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"4.6","version":"57'12","reported_seq":43,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.208051+0000","last_change":"2026-03-09T20:20:27.102001+0000","last_active":"2026-03-09T20:20:44.208051+0000","last_peered":"2026-03-09T20:20:44.208051+0000","last_clean":"2026-03-09T20:20:44.208051+0000","last_became_active":"2026-03-09T20:20:27.101896+0000","last_became_peered":"2026-03-09T20:20:27.101896+0000","last_unstale":"2026-03-09T20:20:44.208051+0000","last_undegraded":"2026-03-09T20:20:44.208051+0000","last_fullsized":"2026-03-09T20:20:44.208051+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_clean_scrub_stamp":"2026-03-09T20:20:26.062853+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:37:09.277944+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":6,"num_object_clones":0,"num_object_copies":18,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":6,"num_whiteouts":0,"num_read":18,"num_read_kb":12,"num_write":12,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,2],"acting":[0,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.1","version":"0'0","reported_seq":29,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.207978+0000","last_change":"2026-03-09T20:20:25.094367+0000","last_active":"2026-03-09T20:20:44.207978+0000","last_peered":"2026-03-09T20:20:44.207978+0000","last_clean":"2026-03-09T20:20:44.207978+0000","last_became_active":"2026-03-09T20:20:25.094296+0000","last_became_peered":"2026-03-09T20:20:25.094296+0000","last_unstale":"2026-03-09T20:20:44.207978+0000","last_undegraded":"2026-03-09T20:20:44.207978+0000","last_fullsized":"2026-03-09T20:20:44.207978+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_clean_scrub_stamp":"2026-03-09T20:20:24.048536+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:58:15.248588+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,3],"acting":[0,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.0","version":"57'5","reported_seq":54,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.207592+0000","last_change":"2026-03-09T20:20:27.263735+0000","last_active":"2026-03-09T20:20:44.207592+0000","last_peered":"2026-03-09T20:20:44.207592+0000","last_clean":"2026-03-09T20:20:44.207592+0000","last_became_active":"2026-03-09T20:20:25.091079+0000","last_became_peered":"2026-03-09T20:20:25.091079+0000","last_unstale":"2026-03-09T20:20:44.207592+0000","last_undegraded":"2026-03-09T20:20:44.207592+0000","last_fullsized":"2026-03-09T20:20:44.207592+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_clean_scrub_stamp":"2026-03-09T20:20:24.048536+0000","objects_scrubbed":0,"log_size":5,"log_dups_size":0,"ondisk_log_size":5,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T06:45:26.368374+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00033217099999999997,"stat_sum":{"num_bytes":389,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":17,"num_read_kb":12,"num_write":4,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,1,0],"acting":[7,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.7","version":"0'0","reported_seq":21,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.154078+0000","last_change":"2026-03-09T20:20:29.094765+0000","last_active":"2026-03-09T20:20:44.154078+0000","last_peered":"2026-03-09T20:20:44.154078+0000","last_clean":"2026-03-09T20:20:44.154078+0000","last_became_active":"2026-03-09T20:20:29.094341+0000","last_became_peered":"2026-03-09T20:20:29.094341+0000","last_unstale":"2026-03-09T20:20:44.154078+0000","last_undegraded":"2026-03-09T20:20:44.154078+0000","last_fullsized":"2026-03-09T20:20:44.154078+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_clean_scrub_stamp":"2026-03-09T20:20:28.074724+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:37:12.610037+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.4","version":"57'1","reported_seq":18,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.212178+0000","last_change":"2026-03-09T20:20:31.119971+0000","last_active":"2026-03-09T20:20:44.212178+0000","last_peered":"2026-03-09T20:20:44.212178+0000","last_clean":"2026-03-09T20:20:44.212178+0000","last_became_active":"2026-03-09T20:20:31.119855+0000","last_became_peered":"2026-03-09T20:20:31.119855+0000","last_unstale":"2026-03-09T20:20:44.212178+0000","last_undegraded":"2026-03-09T20:20:44.212178+0000","last_fullsized":"2026-03-09T20:20:44.212178+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_clean_scrub_stamp":"2026-03-09T20:20:30.082663+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:25:41.792869+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":13,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,3],"acting":[1,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.5","version":"57'16","reported_seq":52,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.160922+0000","last_change":"2026-03-09T20:20:27.266886+0000","last_active":"2026-03-09T20:20:44.160922+0000","last_peered":"2026-03-09T20:20:44.160922+0000","last_clean":"2026-03-09T20:20:44.160922+0000","last_became_active":"2026-03-09T20:20:27.266637+0000","last_became_peered":"2026-03-09T20:20:27.266637+0000","last_unstale":"2026-03-09T20:20:44.160922+0000","last_undegraded":"2026-03-09T20:20:44.160922+0000","last_fullsized":"2026-03-09T20:20:44.160922+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_clean_scrub_stamp":"2026-03-09T20:20:26.062853+0000","objects_scrubbed":0,"log_size":16,"log_dups_size":0,"ondisk_log_size":16,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:29:30.343967+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":154,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":25,"num_read_kb":15,"num_write":13,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"3.2","version":"0'0","reported_seq":29,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.210415+0000","last_change":"2026-03-09T20:20:25.096348+0000","last_active":"2026-03-09T20:20:44.210415+0000","last_peered":"2026-03-09T20:20:44.210415+0000","last_clean":"2026-03-09T20:20:44.210415+0000","last_became_active":"2026-03-09T20:20:25.096237+0000","last_became_peered":"2026-03-09T20:20:25.096237+0000","last_unstale":"2026-03-09T20:20:44.210415+0000","last_undegraded":"2026-03-09T20:20:44.210415+0000","last_fullsized":"2026-03-09T20:20:44.210415+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_clean_scrub_stamp":"2026-03-09T20:20:24.048536+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T07:28:03.529580+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,5,6],"acting":[3,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"1.0","version":"20'32","reported_seq":39,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.207819+0000","last_change":"2026-03-09T20:20:21.963704+0000","last_active":"2026-03-09T20:20:44.207819+0000","last_peered":"2026-03-09T20:20:44.207819+0000","last_clean":"2026-03-09T20:20:44.207819+0000","last_became_active":"2026-03-09T20:20:21.958081+0000","last_became_peered":"2026-03-09T20:20:21.958081+0000","last_unstale":"2026-03-09T20:20:44.207819+0000","last_undegraded":"2026-03-09T20:20:44.207819+0000","last_fullsized":"2026-03-09T20:20:44.207819+0000","mapping_epoch":47,"log_start":"0'0","ondisk_log_start":"0'0","created":19,"last_epoch_clean":48,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:19:21.617349+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:19:21.617349+0000","last_clean_scrub_stamp":"2026-03-09T20:19:21.617349+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:47:48.345378+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,6],"acting":[7,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.4","version":"0'0","reported_seq":21,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.207708+0000","last_change":"2026-03-09T20:20:29.112704+0000","last_active":"2026-03-09T20:20:44.207708+0000","last_peered":"2026-03-09T20:20:44.207708+0000","last_clean":"2026-03-09T20:20:44.207708+0000","last_became_active":"2026-03-09T20:20:29.112329+0000","last_became_peered":"2026-03-09T20:20:29.112329+0000","last_unstale":"2026-03-09T20:20:44.207708+0000","last_undegraded":"2026-03-09T20:20:44.207708+0000","last_fullsized":"2026-03-09T20:20:44.207708+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_clean_scrub_stamp":"2026-03-09T20:20:28.074724+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T06:32:23.055641+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,5],"acting":[7,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.7","version":"0'0","reported_seq":17,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.158390+0000","last_change":"2026-03-09T20:20:31.115212+0000","last_active":"2026-03-09T20:20:44.158390+0000","last_peered":"2026-03-09T20:20:44.158390+0000","last_clean":"2026-03-09T20:20:44.158390+0000","last_became_active":"2026-03-09T20:20:31.115080+0000","last_became_peered":"2026-03-09T20:20:31.115080+0000","last_unstale":"2026-03-09T20:20:44.158390+0000","last_undegraded":"2026-03-09T20:20:44.158390+0000","last_fullsized":"2026-03-09T20:20:44.158390+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_clean_scrub_stamp":"2026-03-09T20:20:30.082663+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:14:58.584249+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,4],"acting":[5,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"4.e","version":"57'11","reported_seq":44,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.209620+0000","last_change":"2026-03-09T20:20:27.096326+0000","last_active":"2026-03-09T20:20:44.209620+0000","last_peered":"2026-03-09T20:20:44.209620+0000","last_clean":"2026-03-09T20:20:44.209620+0000","last_became_active":"2026-03-09T20:20:27.096073+0000","last_became_peered":"2026-03-09T20:20:27.096073+0000","last_unstale":"2026-03-09T20:20:44.209620+0000","last_undegraded":"2026-03-09T20:20:44.209620+0000","last_fullsized":"2026-03-09T20:20:44.209620+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_clean_scrub_stamp":"2026-03-09T20:20:26.062853+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T05:40:22.924854+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.9","version":"0'0","reported_seq":29,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.209597+0000","last_change":"2026-03-09T20:20:25.091936+0000","last_active":"2026-03-09T20:20:44.209597+0000","last_peered":"2026-03-09T20:20:44.209597+0000","last_clean":"2026-03-09T20:20:44.209597+0000","last_became_active":"2026-03-09T20:20:25.091855+0000","last_became_peered":"2026-03-09T20:20:25.091855+0000","last_unstale":"2026-03-09T20:20:44.209597+0000","last_undegraded":"2026-03-09T20:20:44.209597+0000","last_fullsized":"2026-03-09T20:20:44.209597+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_clean_scrub_stamp":"2026-03-09T20:20:24.048536+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:41:15.976607+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,7],"acting":[4,2,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.f","version":"0'0","reported_seq":21,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.158630+0000","last_change":"2026-03-09T20:20:29.105137+0000","last_active":"2026-03-09T20:20:44.158630+0000","last_peered":"2026-03-09T20:20:44.158630+0000","last_clean":"2026-03-09T20:20:44.158630+0000","last_became_active":"2026-03-09T20:20:29.105005+0000","last_became_peered":"2026-03-09T20:20:29.105005+0000","last_unstale":"2026-03-09T20:20:44.158630+0000","last_undegraded":"2026-03-09T20:20:44.158630+0000","last_fullsized":"2026-03-09T20:20:44.158630+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_clean_scrub_stamp":"2026-03-09T20:20:28.074724+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:11:26.358855+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,6],"acting":[5,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.c","version":"0'0","reported_seq":17,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.209561+0000","last_change":"2026-03-09T20:20:31.250025+0000","last_active":"2026-03-09T20:20:44.209561+0000","last_peered":"2026-03-09T20:20:44.209561+0000","last_clean":"2026-03-09T20:20:44.209561+0000","last_became_active":"2026-03-09T20:20:31.248210+0000","last_became_peered":"2026-03-09T20:20:31.248210+0000","last_unstale":"2026-03-09T20:20:44.209561+0000","last_undegraded":"2026-03-09T20:20:44.209561+0000","last_fullsized":"2026-03-09T20:20:44.209561+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_clean_scrub_stamp":"2026-03-09T20:20:30.082663+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:38:55.858138+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,5],"acting":[3,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.d","version":"57'17","reported_seq":53,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.209988+0000","last_change":"2026-03-09T20:20:27.105201+0000","last_active":"2026-03-09T20:20:44.209988+0000","last_peered":"2026-03-09T20:20:44.209988+0000","last_clean":"2026-03-09T20:20:44.209988+0000","last_became_active":"2026-03-09T20:20:27.104960+0000","last_became_peered":"2026-03-09T20:20:27.104960+0000","last_unstale":"2026-03-09T20:20:44.209988+0000","last_undegraded":"2026-03-09T20:20:44.209988+0000","last_fullsized":"2026-03-09T20:20:44.209988+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_clean_scrub_stamp":"2026-03-09T20:20:26.062853+0000","objects_scrubbed":0,"log_size":17,"log_dups_size":0,"ondisk_log_size":17,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:11:39.378462+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":29,"num_read_kb":19,"num_write":18,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,1],"acting":[4,2,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.a","version":"0'0","reported_seq":29,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.161146+0000","last_change":"2026-03-09T20:20:25.090144+0000","last_active":"2026-03-09T20:20:44.161146+0000","last_peered":"2026-03-09T20:20:44.161146+0000","last_clean":"2026-03-09T20:20:44.161146+0000","last_became_active":"2026-03-09T20:20:25.090042+0000","last_became_peered":"2026-03-09T20:20:25.090042+0000","last_unstale":"2026-03-09T20:20:44.161146+0000","last_undegraded":"2026-03-09T20:20:44.161146+0000","last_fullsized":"2026-03-09T20:20:44.161146+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_clean_scrub_stamp":"2026-03-09T20:20:24.048536+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:55:48.839114+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,1],"acting":[6,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.c","version":"0'0","reported_seq":21,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.212581+0000","last_change":"2026-03-09T20:20:29.106244+0000","last_active":"2026-03-09T20:20:44.212581+0000","last_peered":"2026-03-09T20:20:44.212581+0000","last_clean":"2026-03-09T20:20:44.212581+0000","last_became_active":"2026-03-09T20:20:29.103942+0000","last_became_peered":"2026-03-09T20:20:29.103942+0000","last_unstale":"2026-03-09T20:20:44.212581+0000","last_undegraded":"2026-03-09T20:20:44.212581+0000","last_fullsized":"2026-03-09T20:20:44.212581+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_clean_scrub_stamp":"2026-03-09T20:20:28.074724+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:42:41.757047+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,0],"acting":[1,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.f","version":"0'0","reported_seq":17,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.213543+0000","last_change":"2026-03-09T20:20:31.129056+0000","last_active":"2026-03-09T20:20:44.213543+0000","last_peered":"2026-03-09T20:20:44.213543+0000","last_clean":"2026-03-09T20:20:44.213543+0000","last_became_active":"2026-03-09T20:20:31.128748+0000","last_became_peered":"2026-03-09T20:20:31.128748+0000","last_unstale":"2026-03-09T20:20:44.213543+0000","last_undegraded":"2026-03-09T20:20:44.213543+0000","last_fullsized":"2026-03-09T20:20:44.213543+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_clean_scrub_stamp":"2026-03-09T20:20:30.082663+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T07:17:02.646423+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,4],"acting":[2,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"4.c","version":"57'10","reported_seq":40,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.209488+0000","last_change":"2026-03-09T20:20:27.109846+0000","last_active":"2026-03-09T20:20:44.209488+0000","last_peered":"2026-03-09T20:20:44.209488+0000","last_clean":"2026-03-09T20:20:44.209488+0000","last_became_active":"2026-03-09T20:20:27.109397+0000","last_became_peered":"2026-03-09T20:20:27.109397+0000","last_unstale":"2026-03-09T20:20:44.209488+0000","last_undegraded":"2026-03-09T20:20:44.209488+0000","last_fullsized":"2026-03-09T20:20:44.209488+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_clean_scrub_stamp":"2026-03-09T20:20:26.062853+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:31:20.141251+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,6],"acting":[4,3,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.b","version":"0'0","reported_seq":29,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.210187+0000","last_change":"2026-03-09T20:20:25.087158+0000","last_active":"2026-03-09T20:20:44.210187+0000","last_peered":"2026-03-09T20:20:44.210187+0000","last_clean":"2026-03-09T20:20:44.210187+0000","last_became_active":"2026-03-09T20:20:25.087002+0000","last_became_peered":"2026-03-09T20:20:25.087002+0000","last_unstale":"2026-03-09T20:20:44.210187+0000","last_undegraded":"2026-03-09T20:20:44.210187+0000","last_fullsized":"2026-03-09T20:20:44.210187+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_clean_scrub_stamp":"2026-03-09T20:20:24.048536+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T05:16:04.769551+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,4],"acting":[3,0,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.d","version":"57'8","reported_seq":34,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.213964+0000","last_change":"2026-03-09T20:20:29.114758+0000","last_active":"2026-03-09T20:20:44.213964+0000","last_peered":"2026-03-09T20:20:44.213964+0000","last_clean":"2026-03-09T20:20:44.213964+0000","last_became_active":"2026-03-09T20:20:29.112684+0000","last_became_peered":"2026-03-09T20:20:29.112684+0000","last_unstale":"2026-03-09T20:20:44.213964+0000","last_undegraded":"2026-03-09T20:20:44.213964+0000","last_fullsized":"2026-03-09T20:20:44.213964+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_clean_scrub_stamp":"2026-03-09T20:20:28.074724+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T04:13:29.745678+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,7,5],"acting":[2,7,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.e","version":"0'0","reported_seq":17,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.209349+0000","last_change":"2026-03-09T20:20:31.137862+0000","last_active":"2026-03-09T20:20:44.209349+0000","last_peered":"2026-03-09T20:20:44.209349+0000","last_clean":"2026-03-09T20:20:44.209349+0000","last_became_active":"2026-03-09T20:20:31.137762+0000","last_became_peered":"2026-03-09T20:20:31.137762+0000","last_unstale":"2026-03-09T20:20:44.209349+0000","last_undegraded":"2026-03-09T20:20:44.209349+0000","last_fullsized":"2026-03-09T20:20:44.209349+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_clean_scrub_stamp":"2026-03-09T20:20:30.082663+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:53:05.505451+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,2],"acting":[4,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"4.b","version":"57'9","reported_seq":41,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.208448+0000","last_change":"2026-03-09T20:20:27.099848+0000","last_active":"2026-03-09T20:20:44.208448+0000","last_peered":"2026-03-09T20:20:44.208448+0000","last_clean":"2026-03-09T20:20:44.208448+0000","last_became_active":"2026-03-09T20:20:27.099686+0000","last_became_peered":"2026-03-09T20:20:27.099686+0000","last_unstale":"2026-03-09T20:20:44.208448+0000","last_undegraded":"2026-03-09T20:20:44.208448+0000","last_fullsized":"2026-03-09T20:20:44.208448+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_clean_scrub_stamp":"2026-03-09T20:20:26.062853+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:01:33.616966+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.c","version":"0'0","reported_seq":29,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.154347+0000","last_change":"2026-03-09T20:20:25.093539+0000","last_active":"2026-03-09T20:20:44.154347+0000","last_peered":"2026-03-09T20:20:44.154347+0000","last_clean":"2026-03-09T20:20:44.154347+0000","last_became_active":"2026-03-09T20:20:25.093423+0000","last_became_peered":"2026-03-09T20:20:25.093423+0000","last_unstale":"2026-03-09T20:20:44.154347+0000","last_undegraded":"2026-03-09T20:20:44.154347+0000","last_fullsized":"2026-03-09T20:20:44.154347+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_clean_scrub_stamp":"2026-03-09T20:20:24.048536+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:16:38.124259+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,6],"acting":[5,3,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.a","version":"0'0","reported_seq":21,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.214062+0000","last_change":"2026-03-09T20:20:29.104549+0000","last_active":"2026-03-09T20:20:44.214062+0000","last_peered":"2026-03-09T20:20:44.214062+0000","last_clean":"2026-03-09T20:20:44.214062+0000","last_became_active":"2026-03-09T20:20:29.104319+0000","last_became_peered":"2026-03-09T20:20:29.104319+0000","last_unstale":"2026-03-09T20:20:44.214062+0000","last_undegraded":"2026-03-09T20:20:44.214062+0000","last_fullsized":"2026-03-09T20:20:44.214062+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_clean_scrub_stamp":"2026-03-09T20:20:28.074724+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:30:05.065234+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,4,3],"acting":[2,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.9","version":"0'0","reported_seq":17,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.208452+0000","last_change":"2026-03-09T20:20:31.126035+0000","last_active":"2026-03-09T20:20:44.208452+0000","last_peered":"2026-03-09T20:20:44.208452+0000","last_clean":"2026-03-09T20:20:44.208452+0000","last_became_active":"2026-03-09T20:20:31.125823+0000","last_became_peered":"2026-03-09T20:20:31.125823+0000","last_unstale":"2026-03-09T20:20:44.208452+0000","last_undegraded":"2026-03-09T20:20:44.208452+0000","last_fullsized":"2026-03-09T20:20:44.208452+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_clean_scrub_stamp":"2026-03-09T20:20:30.082663+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T04:43:25.953300+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,2],"acting":[0,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.a","version":"57'19","reported_seq":56,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.161188+0000","last_change":"2026-03-09T20:20:27.266756+0000","last_active":"2026-03-09T20:20:44.161188+0000","last_peered":"2026-03-09T20:20:44.161188+0000","last_clean":"2026-03-09T20:20:44.161188+0000","last_became_active":"2026-03-09T20:20:27.266500+0000","last_became_peered":"2026-03-09T20:20:27.266500+0000","last_unstale":"2026-03-09T20:20:44.161188+0000","last_undegraded":"2026-03-09T20:20:44.161188+0000","last_fullsized":"2026-03-09T20:20:44.161188+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_clean_scrub_stamp":"2026-03-09T20:20:26.062853+0000","objects_scrubbed":0,"log_size":19,"log_dups_size":0,"ondisk_log_size":19,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T08:16:33.096907+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":9,"num_object_clones":0,"num_object_copies":27,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":9,"num_whiteouts":0,"num_read":32,"num_read_kb":21,"num_write":20,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,1,7],"acting":[6,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"3.d","version":"0'0","reported_seq":29,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.207617+0000","last_change":"2026-03-09T20:20:25.095176+0000","last_active":"2026-03-09T20:20:44.207617+0000","last_peered":"2026-03-09T20:20:44.207617+0000","last_clean":"2026-03-09T20:20:44.207617+0000","last_became_active":"2026-03-09T20:20:25.095043+0000","last_became_peered":"2026-03-09T20:20:25.095043+0000","last_unstale":"2026-03-09T20:20:44.207617+0000","last_undegraded":"2026-03-09T20:20:44.207617+0000","last_fullsized":"2026-03-09T20:20:44.207617+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_clean_scrub_stamp":"2026-03-09T20:20:24.048536+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T06:55:27.069562+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,6],"acting":[7,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.b","version":"0'0","reported_seq":21,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.214009+0000","last_change":"2026-03-09T20:20:29.114918+0000","last_active":"2026-03-09T20:20:44.214009+0000","last_peered":"2026-03-09T20:20:44.214009+0000","last_clean":"2026-03-09T20:20:44.214009+0000","last_became_active":"2026-03-09T20:20:29.113999+0000","last_became_peered":"2026-03-09T20:20:29.113999+0000","last_unstale":"2026-03-09T20:20:44.214009+0000","last_undegraded":"2026-03-09T20:20:44.214009+0000","last_fullsized":"2026-03-09T20:20:44.214009+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_clean_scrub_stamp":"2026-03-09T20:20:28.074724+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T06:46:58.944192+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,5],"acting":[2,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.8","version":"0'0","reported_seq":17,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.207595+0000","last_change":"2026-03-09T20:20:31.129212+0000","last_active":"2026-03-09T20:20:44.207595+0000","last_peered":"2026-03-09T20:20:44.207595+0000","last_clean":"2026-03-09T20:20:44.207595+0000","last_became_active":"2026-03-09T20:20:31.128839+0000","last_became_peered":"2026-03-09T20:20:31.128839+0000","last_unstale":"2026-03-09T20:20:44.207595+0000","last_undegraded":"2026-03-09T20:20:44.207595+0000","last_fullsized":"2026-03-09T20:20:44.207595+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_clean_scrub_stamp":"2026-03-09T20:20:30.082663+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:45:19.431106+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,3],"acting":[7,2,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"4.9","version":"57'12","reported_seq":48,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.209850+0000","last_change":"2026-03-09T20:20:27.110559+0000","last_active":"2026-03-09T20:20:44.209850+0000","last_peered":"2026-03-09T20:20:44.209850+0000","last_clean":"2026-03-09T20:20:44.209850+0000","last_became_active":"2026-03-09T20:20:27.110376+0000","last_became_peered":"2026-03-09T20:20:27.110376+0000","last_unstale":"2026-03-09T20:20:44.209850+0000","last_undegraded":"2026-03-09T20:20:44.209850+0000","last_fullsized":"2026-03-09T20:20:44.209850+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_clean_scrub_stamp":"2026-03-09T20:20:26.062853+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:30:09.672458+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":25,"num_read_kb":16,"num_write":14,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,3],"acting":[4,1,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.e","version":"0'0","reported_seq":29,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.207506+0000","last_change":"2026-03-09T20:20:25.086414+0000","last_active":"2026-03-09T20:20:44.207506+0000","last_peered":"2026-03-09T20:20:44.207506+0000","last_clean":"2026-03-09T20:20:44.207506+0000","last_became_active":"2026-03-09T20:20:25.086305+0000","last_became_peered":"2026-03-09T20:20:25.086305+0000","last_unstale":"2026-03-09T20:20:44.207506+0000","last_undegraded":"2026-03-09T20:20:44.207506+0000","last_fullsized":"2026-03-09T20:20:44.207506+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_clean_scrub_stamp":"2026-03-09T20:20:24.048536+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T07:02:39.048852+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,1],"acting":[7,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.8","version":"0'0","reported_seq":21,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.213865+0000","last_change":"2026-03-09T20:20:29.115627+0000","last_active":"2026-03-09T20:20:44.213865+0000","last_peered":"2026-03-09T20:20:44.213865+0000","last_clean":"2026-03-09T20:20:44.213865+0000","last_became_active":"2026-03-09T20:20:29.113513+0000","last_became_peered":"2026-03-09T20:20:29.113513+0000","last_unstale":"2026-03-09T20:20:44.213865+0000","last_undegraded":"2026-03-09T20:20:44.213865+0000","last_fullsized":"2026-03-09T20:20:44.213865+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_clean_scrub_stamp":"2026-03-09T20:20:28.074724+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:40:18.633043+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,1],"acting":[2,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.b","version":"0'0","reported_seq":17,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.209733+0000","last_change":"2026-03-09T20:20:31.118859+0000","last_active":"2026-03-09T20:20:44.209733+0000","last_peered":"2026-03-09T20:20:44.209733+0000","last_clean":"2026-03-09T20:20:44.209733+0000","last_became_active":"2026-03-09T20:20:31.118735+0000","last_became_peered":"2026-03-09T20:20:31.118735+0000","last_unstale":"2026-03-09T20:20:44.209733+0000","last_undegraded":"2026-03-09T20:20:44.209733+0000","last_fullsized":"2026-03-09T20:20:44.209733+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_clean_scrub_stamp":"2026-03-09T20:20:30.082663+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:01:31.828476+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,1],"acting":[3,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.8","version":"57'15","reported_seq":50,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.153795+0000","last_change":"2026-03-09T20:20:27.265319+0000","last_active":"2026-03-09T20:20:44.153795+0000","last_peered":"2026-03-09T20:20:44.153795+0000","last_clean":"2026-03-09T20:20:44.153795+0000","last_became_active":"2026-03-09T20:20:27.264996+0000","last_became_peered":"2026-03-09T20:20:27.264996+0000","last_unstale":"2026-03-09T20:20:44.153795+0000","last_undegraded":"2026-03-09T20:20:44.153795+0000","last_fullsized":"2026-03-09T20:20:44.153795+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_clean_scrub_stamp":"2026-03-09T20:20:26.062853+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:01:51.760352+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,7,6],"acting":[5,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.f","version":"50'2","reported_seq":41,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.207374+0000","last_change":"2026-03-09T20:20:25.098325+0000","last_active":"2026-03-09T20:20:44.207374+0000","last_peered":"2026-03-09T20:20:44.207374+0000","last_clean":"2026-03-09T20:20:44.207374+0000","last_became_active":"2026-03-09T20:20:25.098233+0000","last_became_peered":"2026-03-09T20:20:25.098233+0000","last_unstale":"2026-03-09T20:20:44.207374+0000","last_undegraded":"2026-03-09T20:20:44.207374+0000","last_fullsized":"2026-03-09T20:20:44.207374+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_clean_scrub_stamp":"2026-03-09T20:20:24.048536+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T06:13:18.278952+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":92,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":10,"num_read_kb":10,"num_write":4,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,0],"acting":[7,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.9","version":"57'8","reported_seq":34,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.207439+0000","last_change":"2026-03-09T20:20:29.106937+0000","last_active":"2026-03-09T20:20:44.207439+0000","last_peered":"2026-03-09T20:20:44.207439+0000","last_clean":"2026-03-09T20:20:44.207439+0000","last_became_active":"2026-03-09T20:20:29.106848+0000","last_became_peered":"2026-03-09T20:20:29.106848+0000","last_unstale":"2026-03-09T20:20:44.207439+0000","last_undegraded":"2026-03-09T20:20:44.207439+0000","last_fullsized":"2026-03-09T20:20:44.207439+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_clean_scrub_stamp":"2026-03-09T20:20:28.074724+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T08:03:25.586144+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,4],"acting":[7,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.a","version":"0'0","reported_seq":17,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.153731+0000","last_change":"2026-03-09T20:20:31.246979+0000","last_active":"2026-03-09T20:20:44.153731+0000","last_peered":"2026-03-09T20:20:44.153731+0000","last_clean":"2026-03-09T20:20:44.153731+0000","last_became_active":"2026-03-09T20:20:31.246876+0000","last_became_peered":"2026-03-09T20:20:31.246876+0000","last_unstale":"2026-03-09T20:20:44.153731+0000","last_undegraded":"2026-03-09T20:20:44.153731+0000","last_fullsized":"2026-03-09T20:20:44.153731+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_clean_scrub_stamp":"2026-03-09T20:20:30.082663+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T04:17:18.496845+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,6,0],"acting":[5,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.10","version":"0'0","reported_seq":29,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.161120+0000","last_change":"2026-03-09T20:20:25.098042+0000","last_active":"2026-03-09T20:20:44.161120+0000","last_peered":"2026-03-09T20:20:44.161120+0000","last_clean":"2026-03-09T20:20:44.161120+0000","last_became_active":"2026-03-09T20:20:25.097946+0000","last_became_peered":"2026-03-09T20:20:25.097946+0000","last_unstale":"2026-03-09T20:20:44.161120+0000","last_undegraded":"2026-03-09T20:20:44.161120+0000","last_fullsized":"2026-03-09T20:20:44.161120+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_clean_scrub_stamp":"2026-03-09T20:20:24.048536+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:17:11.688866+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0,5],"acting":[6,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"4.17","version":"57'6","reported_seq":34,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.210536+0000","last_change":"2026-03-09T20:20:27.266527+0000","last_active":"2026-03-09T20:20:44.210536+0000","last_peered":"2026-03-09T20:20:44.210536+0000","last_clean":"2026-03-09T20:20:44.210536+0000","last_became_active":"2026-03-09T20:20:27.266347+0000","last_became_peered":"2026-03-09T20:20:27.266347+0000","last_unstale":"2026-03-09T20:20:44.210536+0000","last_undegraded":"2026-03-09T20:20:44.210536+0000","last_fullsized":"2026-03-09T20:20:44.210536+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_clean_scrub_stamp":"2026-03-09T20:20:26.062853+0000","objects_scrubbed":0,"log_size":6,"log_dups_size":0,"ondisk_log_size":6,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T05:36:16.910316+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":9,"num_read_kb":6,"num_write":6,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,1],"acting":[3,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.16","version":"0'0","reported_seq":21,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.154176+0000","last_change":"2026-03-09T20:20:29.107531+0000","last_active":"2026-03-09T20:20:44.154176+0000","last_peered":"2026-03-09T20:20:44.154176+0000","last_clean":"2026-03-09T20:20:44.154176+0000","last_became_active":"2026-03-09T20:20:29.107451+0000","last_became_peered":"2026-03-09T20:20:29.107451+0000","last_unstale":"2026-03-09T20:20:44.154176+0000","last_undegraded":"2026-03-09T20:20:44.154176+0000","last_fullsized":"2026-03-09T20:20:44.154176+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_clean_scrub_stamp":"2026-03-09T20:20:28.074724+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:42:43.525097+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,1],"acting":[5,3,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.15","version":"0'0","reported_seq":17,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.208011+0000","last_change":"2026-03-09T20:20:31.249301+0000","last_active":"2026-03-09T20:20:44.208011+0000","last_peered":"2026-03-09T20:20:44.208011+0000","last_clean":"2026-03-09T20:20:44.208011+0000","last_became_active":"2026-03-09T20:20:31.249141+0000","last_became_peered":"2026-03-09T20:20:31.249141+0000","last_unstale":"2026-03-09T20:20:44.208011+0000","last_undegraded":"2026-03-09T20:20:44.208011+0000","last_fullsized":"2026-03-09T20:20:44.208011+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_clean_scrub_stamp":"2026-03-09T20:20:30.082663+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:52:06.940777+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,4],"acting":[7,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"4.16","version":"57'9","reported_seq":41,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.208503+0000","last_change":"2026-03-09T20:20:27.265496+0000","last_active":"2026-03-09T20:20:44.208503+0000","last_peered":"2026-03-09T20:20:44.208503+0000","last_clean":"2026-03-09T20:20:44.208503+0000","last_became_active":"2026-03-09T20:20:27.265005+0000","last_became_peered":"2026-03-09T20:20:27.265005+0000","last_unstale":"2026-03-09T20:20:44.208503+0000","last_undegraded":"2026-03-09T20:20:44.208503+0000","last_fullsized":"2026-03-09T20:20:44.208503+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_clean_scrub_stamp":"2026-03-09T20:20:26.062853+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:04:54.624112+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,3,7],"acting":[0,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.11","version":"0'0","reported_seq":29,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.207325+0000","last_change":"2026-03-09T20:20:25.084393+0000","last_active":"2026-03-09T20:20:44.207325+0000","last_peered":"2026-03-09T20:20:44.207325+0000","last_clean":"2026-03-09T20:20:44.207325+0000","last_became_active":"2026-03-09T20:20:25.084160+0000","last_became_peered":"2026-03-09T20:20:25.084160+0000","last_unstale":"2026-03-09T20:20:44.207325+0000","last_undegraded":"2026-03-09T20:20:44.207325+0000","last_fullsized":"2026-03-09T20:20:44.207325+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_clean_scrub_stamp":"2026-03-09T20:20:24.048536+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:06:43.407991+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,6],"acting":[7,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.17","version":"0'0","reported_seq":21,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.209974+0000","last_change":"2026-03-09T20:20:29.113366+0000","last_active":"2026-03-09T20:20:44.209974+0000","last_peered":"2026-03-09T20:20:44.209974+0000","last_clean":"2026-03-09T20:20:44.209974+0000","last_became_active":"2026-03-09T20:20:29.111816+0000","last_became_peered":"2026-03-09T20:20:29.111816+0000","last_unstale":"2026-03-09T20:20:44.209974+0000","last_undegraded":"2026-03-09T20:20:44.209974+0000","last_fullsized":"2026-03-09T20:20:44.209974+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_clean_scrub_stamp":"2026-03-09T20:20:28.074724+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T08:11:17.014667+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.14","version":"0'0","reported_seq":17,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.214040+0000","last_change":"2026-03-09T20:20:31.128964+0000","last_active":"2026-03-09T20:20:44.214040+0000","last_peered":"2026-03-09T20:20:44.214040+0000","last_clean":"2026-03-09T20:20:44.214040+0000","last_became_active":"2026-03-09T20:20:31.128865+0000","last_became_peered":"2026-03-09T20:20:31.128865+0000","last_unstale":"2026-03-09T20:20:44.214040+0000","last_undegraded":"2026-03-09T20:20:44.214040+0000","last_fullsized":"2026-03-09T20:20:44.214040+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_clean_scrub_stamp":"2026-03-09T20:20:30.082663+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:05:30.720200+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,4,7],"acting":[2,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"4.15","version":"57'9","reported_seq":41,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.158595+0000","last_change":"2026-03-09T20:20:27.265409+0000","last_active":"2026-03-09T20:20:44.158595+0000","last_peered":"2026-03-09T20:20:44.158595+0000","last_clean":"2026-03-09T20:20:44.158595+0000","last_became_active":"2026-03-09T20:20:27.265170+0000","last_became_peered":"2026-03-09T20:20:27.265170+0000","last_unstale":"2026-03-09T20:20:44.158595+0000","last_undegraded":"2026-03-09T20:20:44.158595+0000","last_fullsized":"2026-03-09T20:20:44.158595+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_clean_scrub_stamp":"2026-03-09T20:20:26.062853+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:31:47.493959+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,7,3],"acting":[5,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.12","version":"0'0","reported_seq":29,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.208037+0000","last_change":"2026-03-09T20:20:25.086850+0000","last_active":"2026-03-09T20:20:44.208037+0000","last_peered":"2026-03-09T20:20:44.208037+0000","last_clean":"2026-03-09T20:20:44.208037+0000","last_became_active":"2026-03-09T20:20:25.086363+0000","last_became_peered":"2026-03-09T20:20:25.086363+0000","last_unstale":"2026-03-09T20:20:44.208037+0000","last_undegraded":"2026-03-09T20:20:44.208037+0000","last_fullsized":"2026-03-09T20:20:44.208037+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_clean_scrub_stamp":"2026-03-09T20:20:24.048536+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:27:09.987101+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.14","version":"57'8","reported_seq":34,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.210243+0000","last_change":"2026-03-09T20:20:29.109278+0000","last_active":"2026-03-09T20:20:44.210243+0000","last_peered":"2026-03-09T20:20:44.210243+0000","last_clean":"2026-03-09T20:20:44.210243+0000","last_became_active":"2026-03-09T20:20:29.109197+0000","last_became_peered":"2026-03-09T20:20:29.109197+0000","last_unstale":"2026-03-09T20:20:44.210243+0000","last_undegraded":"2026-03-09T20:20:44.210243+0000","last_fullsized":"2026-03-09T20:20:44.210243+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_clean_scrub_stamp":"2026-03-09T20:20:28.074724+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T05:12:17.912883+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,2],"acting":[3,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.17","version":"0'0","reported_seq":17,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.208790+0000","last_change":"2026-03-09T20:20:31.125830+0000","last_active":"2026-03-09T20:20:44.208790+0000","last_peered":"2026-03-09T20:20:44.208790+0000","last_clean":"2026-03-09T20:20:44.208790+0000","last_became_active":"2026-03-09T20:20:31.125580+0000","last_became_peered":"2026-03-09T20:20:31.125580+0000","last_unstale":"2026-03-09T20:20:44.208790+0000","last_undegraded":"2026-03-09T20:20:44.208790+0000","last_fullsized":"2026-03-09T20:20:44.208790+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_clean_scrub_stamp":"2026-03-09T20:20:30.082663+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:20:16.253104+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,5],"acting":[4,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"4.14","version":"57'10","reported_seq":40,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.210802+0000","last_change":"2026-03-09T20:20:27.266068+0000","last_active":"2026-03-09T20:20:44.210802+0000","last_peered":"2026-03-09T20:20:44.210802+0000","last_clean":"2026-03-09T20:20:44.210802+0000","last_became_active":"2026-03-09T20:20:27.265928+0000","last_became_peered":"2026-03-09T20:20:27.265928+0000","last_unstale":"2026-03-09T20:20:44.210802+0000","last_undegraded":"2026-03-09T20:20:44.210802+0000","last_fullsized":"2026-03-09T20:20:44.210802+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_clean_scrub_stamp":"2026-03-09T20:20:26.062853+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T05:43:39.550684+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.13","version":"0'0","reported_seq":29,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.207238+0000","last_change":"2026-03-09T20:20:25.084254+0000","last_active":"2026-03-09T20:20:44.207238+0000","last_peered":"2026-03-09T20:20:44.207238+0000","last_clean":"2026-03-09T20:20:44.207238+0000","last_became_active":"2026-03-09T20:20:25.083906+0000","last_became_peered":"2026-03-09T20:20:25.083906+0000","last_unstale":"2026-03-09T20:20:44.207238+0000","last_undegraded":"2026-03-09T20:20:44.207238+0000","last_fullsized":"2026-03-09T20:20:44.207238+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_clean_scrub_stamp":"2026-03-09T20:20:24.048536+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T05:17:29.731372+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,2],"acting":[7,4,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.15","version":"57'8","reported_seq":34,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.158573+0000","last_change":"2026-03-09T20:20:29.094965+0000","last_active":"2026-03-09T20:20:44.158573+0000","last_peered":"2026-03-09T20:20:44.158573+0000","last_clean":"2026-03-09T20:20:44.158573+0000","last_became_active":"2026-03-09T20:20:29.094425+0000","last_became_peered":"2026-03-09T20:20:29.094425+0000","last_unstale":"2026-03-09T20:20:44.158573+0000","last_undegraded":"2026-03-09T20:20:44.158573+0000","last_fullsized":"2026-03-09T20:20:44.158573+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_clean_scrub_stamp":"2026-03-09T20:20:28.074724+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T04:57:35.559552+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.16","version":"0'0","reported_seq":17,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.207835+0000","last_change":"2026-03-09T20:20:31.116948+0000","last_active":"2026-03-09T20:20:44.207835+0000","last_peered":"2026-03-09T20:20:44.207835+0000","last_clean":"2026-03-09T20:20:44.207835+0000","last_became_active":"2026-03-09T20:20:31.116819+0000","last_became_peered":"2026-03-09T20:20:31.116819+0000","last_unstale":"2026-03-09T20:20:44.207835+0000","last_undegraded":"2026-03-09T20:20:44.207835+0000","last_fullsized":"2026-03-09T20:20:44.207835+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_clean_scrub_stamp":"2026-03-09T20:20:30.082663+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T04:27:25.647223+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.13","version":"57'11","reported_seq":44,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.209546+0000","last_change":"2026-03-09T20:20:27.096425+0000","last_active":"2026-03-09T20:20:44.209546+0000","last_peered":"2026-03-09T20:20:44.209546+0000","last_clean":"2026-03-09T20:20:44.209546+0000","last_became_active":"2026-03-09T20:20:27.096362+0000","last_became_peered":"2026-03-09T20:20:27.096362+0000","last_unstale":"2026-03-09T20:20:44.209546+0000","last_undegraded":"2026-03-09T20:20:44.209546+0000","last_fullsized":"2026-03-09T20:20:44.209546+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_clean_scrub_stamp":"2026-03-09T20:20:26.062853+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:06:42.717186+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.14","version":"0'0","reported_seq":29,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.209531+0000","last_change":"2026-03-09T20:20:25.091685+0000","last_active":"2026-03-09T20:20:44.209531+0000","last_peered":"2026-03-09T20:20:44.209531+0000","last_clean":"2026-03-09T20:20:44.209531+0000","last_became_active":"2026-03-09T20:20:25.091601+0000","last_became_peered":"2026-03-09T20:20:25.091601+0000","last_unstale":"2026-03-09T20:20:44.209531+0000","last_undegraded":"2026-03-09T20:20:44.209531+0000","last_fullsized":"2026-03-09T20:20:44.209531+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_clean_scrub_stamp":"2026-03-09T20:20:24.048536+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:20:54.457876+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,7,6],"acting":[4,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.12","version":"0'0","reported_seq":21,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.211875+0000","last_change":"2026-03-09T20:20:29.103133+0000","last_active":"2026-03-09T20:20:44.211875+0000","last_peered":"2026-03-09T20:20:44.211875+0000","last_clean":"2026-03-09T20:20:44.211875+0000","last_became_active":"2026-03-09T20:20:29.103014+0000","last_became_peered":"2026-03-09T20:20:29.103014+0000","last_unstale":"2026-03-09T20:20:44.211875+0000","last_undegraded":"2026-03-09T20:20:44.211875+0000","last_fullsized":"2026-03-09T20:20:44.211875+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_clean_scrub_stamp":"2026-03-09T20:20:28.074724+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T06:43:25.099989+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,3],"acting":[1,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.11","version":"0'0","reported_seq":17,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.209584+0000","last_change":"2026-03-09T20:20:31.140840+0000","last_active":"2026-03-09T20:20:44.209584+0000","last_peered":"2026-03-09T20:20:44.209584+0000","last_clean":"2026-03-09T20:20:44.209584+0000","last_became_active":"2026-03-09T20:20:31.140725+0000","last_became_peered":"2026-03-09T20:20:31.140725+0000","last_unstale":"2026-03-09T20:20:44.209584+0000","last_undegraded":"2026-03-09T20:20:44.209584+0000","last_fullsized":"2026-03-09T20:20:44.209584+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_clean_scrub_stamp":"2026-03-09T20:20:30.082663+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:43:39.655734+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,5],"acting":[3,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.12","version":"57'9","reported_seq":41,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.212047+0000","last_change":"2026-03-09T20:20:27.103957+0000","last_active":"2026-03-09T20:20:44.212047+0000","last_peered":"2026-03-09T20:20:44.212047+0000","last_clean":"2026-03-09T20:20:44.212047+0000","last_became_active":"2026-03-09T20:20:27.103836+0000","last_became_peered":"2026-03-09T20:20:27.103836+0000","last_unstale":"2026-03-09T20:20:44.212047+0000","last_undegraded":"2026-03-09T20:20:44.212047+0000","last_fullsized":"2026-03-09T20:20:44.212047+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_clean_scrub_stamp":"2026-03-09T20:20:26.062853+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:47:23.688947+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6,2],"acting":[1,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.15","version":"0'0","reported_seq":29,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.207191+0000","last_change":"2026-03-09T20:20:25.084326+0000","last_active":"2026-03-09T20:20:44.207191+0000","last_peered":"2026-03-09T20:20:44.207191+0000","last_clean":"2026-03-09T20:20:44.207191+0000","last_became_active":"2026-03-09T20:20:25.084031+0000","last_became_peered":"2026-03-09T20:20:25.084031+0000","last_unstale":"2026-03-09T20:20:44.207191+0000","last_undegraded":"2026-03-09T20:20:44.207191+0000","last_fullsized":"2026-03-09T20:20:44.207191+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_clean_scrub_stamp":"2026-03-09T20:20:24.048536+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T08:04:18.400605+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,3,4],"acting":[7,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.13","version":"0'0","reported_seq":21,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.209948+0000","last_change":"2026-03-09T20:20:29.113095+0000","last_active":"2026-03-09T20:20:44.209948+0000","last_peered":"2026-03-09T20:20:44.209948+0000","last_clean":"2026-03-09T20:20:44.209948+0000","last_became_active":"2026-03-09T20:20:29.111138+0000","last_became_peered":"2026-03-09T20:20:29.111138+0000","last_unstale":"2026-03-09T20:20:44.209948+0000","last_undegraded":"2026-03-09T20:20:44.209948+0000","last_fullsized":"2026-03-09T20:20:44.209948+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_clean_scrub_stamp":"2026-03-09T20:20:28.074724+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T02:01:28.618150+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.10","version":"0'0","reported_seq":17,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.207765+0000","last_change":"2026-03-09T20:20:31.103376+0000","last_active":"2026-03-09T20:20:44.207765+0000","last_peered":"2026-03-09T20:20:44.207765+0000","last_clean":"2026-03-09T20:20:44.207765+0000","last_became_active":"2026-03-09T20:20:31.103147+0000","last_became_peered":"2026-03-09T20:20:31.103147+0000","last_unstale":"2026-03-09T20:20:44.207765+0000","last_undegraded":"2026-03-09T20:20:44.207765+0000","last_fullsized":"2026-03-09T20:20:44.207765+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_clean_scrub_stamp":"2026-03-09T20:20:30.082663+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:04:21.010486+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,1],"acting":[0,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.11","version":"57'11","reported_seq":44,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.210481+0000","last_change":"2026-03-09T20:20:27.266748+0000","last_active":"2026-03-09T20:20:44.210481+0000","last_peered":"2026-03-09T20:20:44.210481+0000","last_clean":"2026-03-09T20:20:44.210481+0000","last_became_active":"2026-03-09T20:20:27.266195+0000","last_became_peered":"2026-03-09T20:20:27.266195+0000","last_unstale":"2026-03-09T20:20:44.210481+0000","last_undegraded":"2026-03-09T20:20:44.210481+0000","last_fullsized":"2026-03-09T20:20:44.210481+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_clean_scrub_stamp":"2026-03-09T20:20:26.062853+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:30:46.079870+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,6],"acting":[3,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.16","version":"0'0","reported_seq":29,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.154104+0000","last_change":"2026-03-09T20:20:25.087173+0000","last_active":"2026-03-09T20:20:44.154104+0000","last_peered":"2026-03-09T20:20:44.154104+0000","last_clean":"2026-03-09T20:20:44.154104+0000","last_became_active":"2026-03-09T20:20:25.086936+0000","last_became_peered":"2026-03-09T20:20:25.086936+0000","last_unstale":"2026-03-09T20:20:44.154104+0000","last_undegraded":"2026-03-09T20:20:44.154104+0000","last_fullsized":"2026-03-09T20:20:44.154104+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_clean_scrub_stamp":"2026-03-09T20:20:24.048536+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T07:04:33.752857+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,7,1],"acting":[5,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.10","version":"0'0","reported_seq":21,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.207836+0000","last_change":"2026-03-09T20:20:29.106431+0000","last_active":"2026-03-09T20:20:44.207836+0000","last_peered":"2026-03-09T20:20:44.207836+0000","last_clean":"2026-03-09T20:20:44.207836+0000","last_became_active":"2026-03-09T20:20:29.106358+0000","last_became_peered":"2026-03-09T20:20:29.106358+0000","last_unstale":"2026-03-09T20:20:44.207836+0000","last_undegraded":"2026-03-09T20:20:44.207836+0000","last_fullsized":"2026-03-09T20:20:44.207836+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_clean_scrub_stamp":"2026-03-09T20:20:28.074724+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T05:02:17.904208+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,6],"acting":[7,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.13","version":"0'0","reported_seq":17,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.210454+0000","last_change":"2026-03-09T20:20:31.250138+0000","last_active":"2026-03-09T20:20:44.210454+0000","last_peered":"2026-03-09T20:20:44.210454+0000","last_clean":"2026-03-09T20:20:44.210454+0000","last_became_active":"2026-03-09T20:20:31.249706+0000","last_became_peered":"2026-03-09T20:20:31.249706+0000","last_unstale":"2026-03-09T20:20:44.210454+0000","last_undegraded":"2026-03-09T20:20:44.210454+0000","last_fullsized":"2026-03-09T20:20:44.210454+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_clean_scrub_stamp":"2026-03-09T20:20:30.082663+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T04:54:26.661586+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,6],"acting":[3,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.10","version":"57'4","reported_seq":31,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.210047+0000","last_change":"2026-03-09T20:20:27.103435+0000","last_active":"2026-03-09T20:20:44.210047+0000","last_peered":"2026-03-09T20:20:44.210047+0000","last_clean":"2026-03-09T20:20:44.210047+0000","last_became_active":"2026-03-09T20:20:27.100968+0000","last_became_peered":"2026-03-09T20:20:27.100968+0000","last_unstale":"2026-03-09T20:20:44.210047+0000","last_undegraded":"2026-03-09T20:20:44.210047+0000","last_fullsized":"2026-03-09T20:20:44.210047+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_clean_scrub_stamp":"2026-03-09T20:20:26.062853+0000","objects_scrubbed":0,"log_size":4,"log_dups_size":0,"ondisk_log_size":4,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:24:36.710784+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":6,"num_read_kb":4,"num_write":4,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,6],"acting":[3,1,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.17","version":"0'0","reported_seq":29,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.207903+0000","last_change":"2026-03-09T20:20:25.085574+0000","last_active":"2026-03-09T20:20:44.207903+0000","last_peered":"2026-03-09T20:20:44.207903+0000","last_clean":"2026-03-09T20:20:44.207903+0000","last_became_active":"2026-03-09T20:20:25.085421+0000","last_became_peered":"2026-03-09T20:20:25.085421+0000","last_unstale":"2026-03-09T20:20:44.207903+0000","last_undegraded":"2026-03-09T20:20:44.207903+0000","last_fullsized":"2026-03-09T20:20:44.207903+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_clean_scrub_stamp":"2026-03-09T20:20:24.048536+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:17:01.205678+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,3],"acting":[0,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.11","version":"0'0","reported_seq":21,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.161400+0000","last_change":"2026-03-09T20:20:29.107045+0000","last_active":"2026-03-09T20:20:44.161400+0000","last_peered":"2026-03-09T20:20:44.161400+0000","last_clean":"2026-03-09T20:20:44.161400+0000","last_became_active":"2026-03-09T20:20:29.106714+0000","last_became_peered":"2026-03-09T20:20:29.106714+0000","last_unstale":"2026-03-09T20:20:44.161400+0000","last_undegraded":"2026-03-09T20:20:44.161400+0000","last_fullsized":"2026-03-09T20:20:44.161400+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_clean_scrub_stamp":"2026-03-09T20:20:28.074724+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:00:34.495981+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.12","version":"0'0","reported_seq":17,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.207668+0000","last_change":"2026-03-09T20:20:31.129332+0000","last_active":"2026-03-09T20:20:44.207668+0000","last_peered":"2026-03-09T20:20:44.207668+0000","last_clean":"2026-03-09T20:20:44.207668+0000","last_became_active":"2026-03-09T20:20:31.129083+0000","last_became_peered":"2026-03-09T20:20:31.129083+0000","last_unstale":"2026-03-09T20:20:44.207668+0000","last_undegraded":"2026-03-09T20:20:44.207668+0000","last_fullsized":"2026-03-09T20:20:44.207668+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_clean_scrub_stamp":"2026-03-09T20:20:30.082663+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:28:01.826546+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,4],"acting":[7,2,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.1d","version":"0'0","reported_seq":17,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.212681+0000","last_change":"2026-03-09T20:20:31.135825+0000","last_active":"2026-03-09T20:20:44.212681+0000","last_peered":"2026-03-09T20:20:44.212681+0000","last_clean":"2026-03-09T20:20:44.212681+0000","last_became_active":"2026-03-09T20:20:31.135740+0000","last_became_peered":"2026-03-09T20:20:31.135740+0000","last_unstale":"2026-03-09T20:20:44.212681+0000","last_undegraded":"2026-03-09T20:20:44.212681+0000","last_fullsized":"2026-03-09T20:20:44.212681+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:30.082663+0000","last_clean_scrub_stamp":"2026-03-09T20:20:30.082663+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:32:00.544718+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,4],"acting":[1,5,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.18","version":"0'0","reported_seq":29,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.210354+0000","last_change":"2026-03-09T20:20:25.089048+0000","last_active":"2026-03-09T20:20:44.210354+0000","last_peered":"2026-03-09T20:20:44.210354+0000","last_clean":"2026-03-09T20:20:44.210354+0000","last_became_active":"2026-03-09T20:20:25.088972+0000","last_became_peered":"2026-03-09T20:20:25.088972+0000","last_unstale":"2026-03-09T20:20:44.210354+0000","last_undegraded":"2026-03-09T20:20:44.210354+0000","last_fullsized":"2026-03-09T20:20:44.210354+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:24.048536+0000","last_clean_scrub_stamp":"2026-03-09T20:20:24.048536+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T08:01:23.796161+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.1f","version":"57'11","reported_seq":44,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.161233+0000","last_change":"2026-03-09T20:20:27.095446+0000","last_active":"2026-03-09T20:20:44.161233+0000","last_peered":"2026-03-09T20:20:44.161233+0000","last_clean":"2026-03-09T20:20:44.161233+0000","last_became_active":"2026-03-09T20:20:27.095246+0000","last_became_peered":"2026-03-09T20:20:27.095246+0000","last_unstale":"2026-03-09T20:20:44.161233+0000","last_undegraded":"2026-03-09T20:20:44.161233+0000","last_fullsized":"2026-03-09T20:20:44.161233+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:26.062853+0000","last_clean_scrub_stamp":"2026-03-09T20:20:26.062853+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T04:08:18.227942+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,5,1],"acting":[6,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.1e","version":"0'0","reported_seq":21,"reported_epoch":59,"state":"active+clean","last_fresh":"2026-03-09T20:20:44.208197+0000","last_change":"2026-03-09T20:20:29.104054+0000","last_active":"2026-03-09T20:20:44.208197+0000","last_peered":"2026-03-09T20:20:44.208197+0000","last_clean":"2026-03-09T20:20:44.208197+0000","last_became_active":"2026-03-09T20:20:29.103791+0000","last_became_peered":"2026-03-09T20:20:29.103791+0000","last_unstale":"2026-03-09T20:20:44.208197+0000","last_undegraded":"2026-03-09T20:20:44.208197+0000","last_fullsized":"2026-03-09T20:20:44.208197+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:20:28.074724+0000","last_clean_scrub_stamp":"2026-03-09T20:20:28.074724+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T05:32:12.585603+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,2],"acting":[0,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]}],"pool_stats":[{"poolid":6,"num_pg":32,"stat_sum":{"num_bytes":416,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":3,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1248,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":2,"ondisk_log_size":2,"up":96,"acting":96,"num_store_stats":8},{"poolid":5,"num_pg":32,"stat_sum":{"num_bytes":0,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":64,"ondisk_log_size":64,"up":96,"acting":96,"num_store_stats":8},
{"poolid":4,"num_pg":32,"stat_sum":{"num_bytes":3702,"num_objects":178,"num_object_clones":0,"num_object_copies":534,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":178,"num_whiteouts":0,"num_read":698,"num_read_kb":455,"num_write":417,"num_write_kb":34,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":417792,"data_stored":11106,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":393,"ondisk_log_size":393,"up":96,"acting":96,"num_store_stats":8},{"poolid":3,"num_pg":32,"stat_sum":{"num_bytes":1613,"num_objects":6,"num_object_clones":0,"num_object_copies":18,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":6,"num_whiteouts":0,"num_read":24,"num_read_kb":24,"num_write":10,"num_write_kb":6,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":73728,"data_stored":4839,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":6,"ondisk_log_size":6,"up":96,"acting":96,"num_store_stats":8},{"poolid":2,"num_pg":3,"stat_sum":{"num_bytes":408,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":17,"num_read_kb":12,"num_write":6,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1224,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"i
nternal_metadata":0},"log_size":8,"ondisk_log_size":8,"up":9,"acting":9,"num_store_stats":7},{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":3}],"osd_stats":[{"osd":7,"up_from":46,"seq":197568495622,"num_pgs":53,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27876,"kb_used_data":1036,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939548,"statfs":{"total":21470642176,"available":21442097152,"internally_reserved":0,"allocated":1060864,"data_stored":686456,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1585,"internal_metadata":27457999},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":6,"up_from":41,"seq":176093659145,"num_pgs":43,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27856,"kb_used_data":1016,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939568,"statfs":{"total":21470642176,"available":21442117632,"internally_reserved":0,"allocated":1040384,"data_stored":685371,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":2,"apply_latency_ms":2,"commit_latency_ns":2000000,"apply_latency_ns":2000000},"alerts":[]},{"osd":5,"up_from":36,"seq":154618822667,"num_pgs":47,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27420,"kb_used_data":580,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940004,"statfs":{"total":21470642176,"available":21442564096,"internally_reserved":0,"allocated":593920,"data_stored":227892,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"
osd":4,"up_from":30,"seq":128849018894,"num_pgs":58,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27432,"kb_used_data":592,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939992,"statfs":{"total":21470642176,"available":21442551808,"internally_reserved":0,"allocated":606208,"data_stored":226696,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":1,"apply_latency_ms":1,"commit_latency_ns":1000000,"apply_latency_ns":1000000},"alerts":[]},{"osd":3,"up_from":25,"seq":107374182416,"num_pgs":56,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27444,"kb_used_data":604,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939980,"statfs":{"total":21470642176,"available":21442539520,"internally_reserved":0,"allocated":618496,"data_stored":226995,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1588,"internal_metadata":27457996},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":3,"apply_latency_ms":3,"commit_latency_ns":3000000,"apply_latency_ns":3000000},"alerts":[]},{"osd":2,"up_from":17,"seq":73014444050,"num_pgs":36,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27408,"kb_used_data":568,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940016,"statfs":{"total":21470642176,"available":21442576384,"internally_reserved":0,"allocated":581632,"data_stored":227668,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":5,"apply_latency_ms":5,"commit_latency_ns":5000000,"apply_latency_ns":5000000},"alerts":[]},{"osd":1,"up_from":12,"seq":51539607573,"num_pgs":57,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27476,"kb_used_data":636,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939948,"statfs":{"total":21470642176,"available":21442506752,"internally_reserved":0,"allocated":651264,"data_stored":228325,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":4,"apply_latency_ms":4,"commit_latency_ns":4000000,"apply_latency_ns":4000000},"alerts":[]},{"osd":0,"up_from":8,"seq":34359738391,"num_pgs":46,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27880,"kb_used_data":1044,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939544,"statfs":{"total":21470642176,"available":21442093056,"internally_reserved":0,"allocated":1069056,"data_stored":687158,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"o
p_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":408,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":16384,"data_stored":1131,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":12288,"data_stored":528,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":1429,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":46,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":16384,"data_stored":184,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid
":3,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":1429,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":92,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":49152,"data_stored":1320,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":90112,"data_stored":2338,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":32768,"data_stored":798,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":73728,"data_stored":1898,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":53248,"data_stored":1474,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":36864,"data_stored":990,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":36864,"data_stored":1034,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":45056,"data_stored":1254,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata
":0},{"poolid":5,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":13,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":403,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":13,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":416,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":403,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-09T20:20:46.066 INFO:tasks.cephadm.ceph_manager.ceph:clean! 2026-03-09T20:20:46.066 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 
2026-03-09T20:20:46.066 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy 2026-03-09T20:20:46.066 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph health --format=json 2026-03-09T20:20:46.319 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:20:46.351 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[51870]: [09/Mar/2026:20:20:45] ENGINE Bus STARTING 2026-03-09T20:20:46.351 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[51870]: [09/Mar/2026:20:20:45] ENGINE Serving on http://192.168.123.105:8765 2026-03-09T20:20:46.351 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[51870]: mgrmap e20: y(active, since 1.03972s), standbys: x 2026-03-09T20:20:46.351 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:20:46.351 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[51870]: from='client.24608 v1:192.168.123.105:0/2149218065' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:20:46.351 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[51870]: [09/Mar/2026:20:20:45] ENGINE Serving on https://192.168.123.105:7150 2026-03-09T20:20:46.351 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[51870]: [09/Mar/2026:20:20:45] ENGINE Bus STARTED 2026-03-09T20:20:46.351 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[51870]: [09/Mar/2026:20:20:45] ENGINE Client ('192.168.123.105', 60206) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T20:20:46.351 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[51870]: pgmap v3: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T20:20:46.351 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:46.351 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:46.351 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:46.351 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:46.351 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:20:46.351 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:20:46.351 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:46.351 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:46.351 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:20:46.351 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:20:46.351 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:46.351 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:20:46.351 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[51870]: Updating vm05:/etc/ceph/ceph.conf 2026-03-09T20:20:46.351 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[51870]: Updating vm09:/etc/ceph/ceph.conf 2026-03-09T20:20:46.351 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[51870]: Updating vm09:/var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/config/ceph.conf 2026-03-09T20:20:46.351 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:46.351 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:46.351 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[61345]: [09/Mar/2026:20:20:45] ENGINE Bus STARTING 2026-03-09T20:20:46.351 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[61345]: [09/Mar/2026:20:20:45] ENGINE Serving on http://192.168.123.105:8765 2026-03-09T20:20:46.351 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[61345]: mgrmap e20: y(active, since 1.03972s), standbys: x 2026-03-09T20:20:46.351 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:20:46.351 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[61345]: from='client.24608 v1:192.168.123.105:0/2149218065' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:20:46.351 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[61345]: [09/Mar/2026:20:20:45] ENGINE Serving on https://192.168.123.105:7150 2026-03-09T20:20:46.351 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[61345]: [09/Mar/2026:20:20:45] ENGINE Bus STARTED 2026-03-09T20:20:46.351 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[61345]: [09/Mar/2026:20:20:45] ENGINE Client ('192.168.123.105', 60206) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T20:20:46.351 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[61345]: pgmap v3: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T20:20:46.352 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 
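The wait_until_healthy step seen above shells into cephadm and queries `ceph health --format=json` until the cluster reports HEALTH_OK (the {"status":"HEALTH_OK",...} reply appears further down). A minimal sketch of such a polling loop, assuming passwordless sudo on the host and the fsid from the logged command; it is not teuthology's own implementation:

    import json
    import subprocess
    import time

    FSID = "c0151936-1bf4-11f1-b896-23f7bea8a6ea"   # fsid from the cephadm shell command above

    def health_status(fsid=FSID):
        # Same command the harness runs (minus --image, which cephadm can infer).
        out = subprocess.run(
            ["sudo", "cephadm", "shell", "--fsid", fsid, "--",
             "ceph", "health", "--format=json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)["status"]            # e.g. "HEALTH_OK"

    def wait_until_healthy(timeout=300, interval=5):
        deadline = time.time() + timeout
        status = "unknown"
        while time.time() < deadline:
            status = health_status()
            if status == "HEALTH_OK":
                return
            time.sleep(interval)
        raise TimeoutError("cluster still %s after %ss" % (status, timeout))

    wait_until_healthy()

In this run a single query already returned HEALTH_OK, so the loop would exit on its first iteration.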
2026-03-09T20:20:46.352 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:46.352 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:46.352 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:46.352 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:20:46.352 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:20:46.352 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:46.352 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:46.352 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:20:46.352 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:20:46.352 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:46.352 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:20:46.352 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[61345]: Updating vm05:/etc/ceph/ceph.conf 2026-03-09T20:20:46.352 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[61345]: Updating vm09:/etc/ceph/ceph.conf 2026-03-09T20:20:46.352 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[61345]: Updating vm09:/var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/config/ceph.conf 2026-03-09T20:20:46.352 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:46.352 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:46 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:46.473 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:46.473+0000 7f19f0328640 1 -- 192.168.123.105:0/224020947 >> v1:192.168.123.105:6789/0 conn(0x7f19e811a770 legacy=0x7f19e811cb60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:46.473 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:46.474+0000 7f19f0328640 1 -- 192.168.123.105:0/224020947 shutdown_connections 2026-03-09T20:20:46.473 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:46.474+0000 7f19f0328640 1 -- 192.168.123.105:0/224020947 >> 192.168.123.105:0/224020947 conn(0x7f19e806e900 msgr2=0x7f19e806ed10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:46.473 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:46.474+0000 7f19f0328640 1 -- 192.168.123.105:0/224020947 shutdown_connections 2026-03-09T20:20:46.474 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:46.474+0000 7f19f0328640 1 -- 192.168.123.105:0/224020947 wait complete. 2026-03-09T20:20:46.474 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:46.474+0000 7f19f0328640 1 Processor -- start 2026-03-09T20:20:46.474 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:46.475+0000 7f19f0328640 1 -- start start 2026-03-09T20:20:46.475 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:46.475+0000 7f19f0328640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f19e81c1010 con 0x7f19e8074230 2026-03-09T20:20:46.475 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:46.475+0000 7f19f0328640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f19e81c2210 con 0x7f19e811e280 2026-03-09T20:20:46.475 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:46.475+0000 7f19f0328640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f19e81c3410 con 0x7f19e811a770 2026-03-09T20:20:46.475 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:46.475+0000 7f19ee89e640 1 --1- >> v1:192.168.123.109:6789/0 conn(0x7f19e811e280 0x7f19e81bf7b0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.109:6789/0 says I am v1:192.168.123.105:56780/0 (socket says 192.168.123.105:56780) 2026-03-09T20:20:46.475 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:46.475+0000 7f19ee89e640 1 -- 192.168.123.105:0/77145268 learned_addr learned my addr 192.168.123.105:0/77145268 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:20:46.475 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:46.475+0000 7f19d77fe640 1 -- 192.168.123.105:0/77145268 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 855175075 0 0) 0x7f19e81c1010 con 0x7f19e8074230 2026-03-09T20:20:46.475 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:46.475+0000 7f19d77fe640 1 -- 192.168.123.105:0/77145268 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f19c8003260 con 0x7f19e8074230 2026-03-09T20:20:46.475 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:46.475+0000 7f19d77fe640 1 -- 192.168.123.105:0/77145268 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1394671071 0 0) 0x7f19e81c2210 con 0x7f19e811e280 2026-03-09T20:20:46.475 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:46.475+0000 7f19d77fe640 1 -- 192.168.123.105:0/77145268 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f19e81c1010 con 0x7f19e811e280 2026-03-09T20:20:46.475 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:46.476+0000 7f19d77fe640 1 -- 192.168.123.105:0/77145268 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1792031904 0 0) 0x7f19e81c1010 con 0x7f19e811e280 2026-03-09T20:20:46.475 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:46.476+0000 7f19d77fe640 1 -- 192.168.123.105:0/77145268 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f19e81c2210 con 0x7f19e811e280 2026-03-09T20:20:46.475 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:46.476+0000 7f19d77fe640 1 -- 192.168.123.105:0/77145268 <== mon.1 
v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f19dc002ef0 con 0x7f19e811e280 2026-03-09T20:20:46.475 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:46.476+0000 7f19d77fe640 1 -- 192.168.123.105:0/77145268 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 259634400 0 0) 0x7f19c8003260 con 0x7f19e8074230 2026-03-09T20:20:46.475 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:46.476+0000 7f19d77fe640 1 -- 192.168.123.105:0/77145268 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f19e81c1010 con 0x7f19e8074230 2026-03-09T20:20:46.476 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:46.476+0000 7f19d77fe640 1 -- 192.168.123.105:0/77145268 <== mon.1 v1:192.168.123.109:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 204237896 0 0) 0x7f19e81c2210 con 0x7f19e811e280 2026-03-09T20:20:46.476 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:46.476+0000 7f19d77fe640 1 -- 192.168.123.105:0/77145268 >> v1:192.168.123.105:6790/0 conn(0x7f19e811a770 legacy=0x7f19e8118db0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:46.476 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:46.476+0000 7f19d77fe640 1 -- 192.168.123.105:0/77145268 >> v1:192.168.123.105:6789/0 conn(0x7f19e8074230 legacy=0x7f19e810e240 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:46.476 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:46.476+0000 7f19d77fe640 1 -- 192.168.123.105:0/77145268 --> v1:192.168.123.109:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f19e81c4610 con 0x7f19e811e280 2026-03-09T20:20:46.476 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:46.476+0000 7f19f0328640 1 -- 192.168.123.105:0/77145268 --> v1:192.168.123.109:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f19e81c2440 con 0x7f19e811e280 2026-03-09T20:20:46.476 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:46.476+0000 7f19d77fe640 1 -- 192.168.123.105:0/77145268 <== mon.1 v1:192.168.123.109:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f19dc003500 con 0x7f19e811e280 2026-03-09T20:20:46.476 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:46.476+0000 7f19d77fe640 1 -- 192.168.123.105:0/77145268 <== mon.1 v1:192.168.123.109:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f19dc005a30 con 0x7f19e811e280 2026-03-09T20:20:46.476 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:46.476+0000 7f19f0328640 1 -- 192.168.123.105:0/77145268 --> v1:192.168.123.109:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f19e81c2a70 con 0x7f19e811e280 2026-03-09T20:20:46.477 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:46.477+0000 7f19d77fe640 1 -- 192.168.123.105:0/77145268 <== mon.1 v1:192.168.123.109:6789/0 7 ==== mgrmap(e 20) ==== 99806+0+0 (unknown 3641764485 0 0) 0x7f19dc01e1c0 con 0x7f19e811e280 2026-03-09T20:20:46.477 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:46.477+0000 7f19f0328640 1 -- 192.168.123.105:0/77145268 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f19e810a470 con 0x7f19e811e280 2026-03-09T20:20:46.479 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:46.478+0000 7f19d77fe640 1 -- 192.168.123.105:0/77145268 <== mon.1 v1:192.168.123.109:6789/0 8 ==== osd_map(59..59 src has 1..59) ==== 6152+0+0 (unknown 1608023118 0 0) 0x7f19dc094980 con 
0x7f19e811e280 2026-03-09T20:20:46.480 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:46.481+0000 7f19d77fe640 1 -- 192.168.123.105:0/77145268 <== mon.1 v1:192.168.123.109:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f19dc05ea70 con 0x7f19e811e280 2026-03-09T20:20:46.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:46 vm09 ceph-mon[54524]: [09/Mar/2026:20:20:45] ENGINE Bus STARTING 2026-03-09T20:20:46.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:46 vm09 ceph-mon[54524]: [09/Mar/2026:20:20:45] ENGINE Serving on http://192.168.123.105:8765 2026-03-09T20:20:46.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:46 vm09 ceph-mon[54524]: mgrmap e20: y(active, since 1.03972s), standbys: x 2026-03-09T20:20:46.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:46 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:20:46.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:46 vm09 ceph-mon[54524]: from='client.24608 v1:192.168.123.105:0/2149218065' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:20:46.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:46 vm09 ceph-mon[54524]: [09/Mar/2026:20:20:45] ENGINE Serving on https://192.168.123.105:7150 2026-03-09T20:20:46.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:46 vm09 ceph-mon[54524]: [09/Mar/2026:20:20:45] ENGINE Bus STARTED 2026-03-09T20:20:46.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:46 vm09 ceph-mon[54524]: [09/Mar/2026:20:20:45] ENGINE Client ('192.168.123.105', 60206) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T20:20:46.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:46 vm09 ceph-mon[54524]: pgmap v3: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T20:20:46.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:46 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:46.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:46 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:46.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:46 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:46.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:46 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:46.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:46 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:20:46.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:46 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:20:46.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:46 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:46.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:46 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:46.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:46 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' 
entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:20:46.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:46 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:20:46.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:46 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:46.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:46 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:20:46.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:46 vm09 ceph-mon[54524]: Updating vm05:/etc/ceph/ceph.conf 2026-03-09T20:20:46.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:46 vm09 ceph-mon[54524]: Updating vm09:/etc/ceph/ceph.conf 2026-03-09T20:20:46.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:46 vm09 ceph-mon[54524]: Updating vm09:/var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/config/ceph.conf 2026-03-09T20:20:46.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:46 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:46.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:46 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:46.594 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:46.594+0000 7f19f0328640 1 -- 192.168.123.105:0/77145268 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "health", "format": "json"} v 0) -- 0x7f19e81250b0 con 0x7f19e811e280 2026-03-09T20:20:46.595 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:20:46.595 INFO:teuthology.orchestra.run.vm05.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]} 2026-03-09T20:20:46.595 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:46.595+0000 7f19d77fe640 1 -- 192.168.123.105:0/77145268 <== mon.1 v1:192.168.123.109:6789/0 10 ==== mon_command_ack([{"prefix": "health", "format": "json"}]=0 v0) ==== 72+0+46 (unknown 3730786198 0 4185958460) 0x7f19dc05e370 con 0x7f19e811e280 2026-03-09T20:20:46.598 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:46.598+0000 7f19f0328640 1 -- 192.168.123.105:0/77145268 >> v1:192.168.123.105:6800/1903060503 conn(0x7f19c80780d0 legacy=0x7f19c807a590 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:46.598 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:46.598+0000 7f19f0328640 1 -- 192.168.123.105:0/77145268 >> v1:192.168.123.109:6789/0 conn(0x7f19e811e280 legacy=0x7f19e81bf7b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:20:46.600 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:46.599+0000 7f19f0328640 1 -- 192.168.123.105:0/77145268 shutdown_connections 2026-03-09T20:20:46.600 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:46.599+0000 7f19f0328640 1 -- 192.168.123.105:0/77145268 >> 192.168.123.105:0/77145268 conn(0x7f19e806e900 msgr2=0x7f19e8072bd0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:20:46.600 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:46.599+0000 7f19f0328640 1 -- 192.168.123.105:0/77145268 shutdown_connections 2026-03-09T20:20:46.600 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T20:20:46.599+0000 7f19f0328640 1 -- 192.168.123.105:0/77145268 wait 
complete. 2026-03-09T20:20:46.746 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy done 2026-03-09T20:20:46.746 INFO:tasks.cephadm:Setup complete, yielding 2026-03-09T20:20:46.746 INFO:teuthology.run_tasks:Running task workunit... 2026-03-09T20:20:46.750 INFO:tasks.workunit:Pulling workunits from ref 569c3e99c9b32a51b4eaf08731c728f4513ed589 2026-03-09T20:20:46.750 INFO:tasks.workunit:Making a separate scratch dir for every client... 2026-03-09T20:20:46.751 DEBUG:teuthology.orchestra.run.vm05:> stat -- /home/ubuntu/cephtest/mnt.0 2026-03-09T20:20:46.766 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T20:20:46.766 INFO:teuthology.orchestra.run.vm05.stderr:stat: cannot statx '/home/ubuntu/cephtest/mnt.0': No such file or directory 2026-03-09T20:20:46.766 DEBUG:teuthology.orchestra.run.vm05:> mkdir -- /home/ubuntu/cephtest/mnt.0 2026-03-09T20:20:46.823 INFO:tasks.workunit:Created dir /home/ubuntu/cephtest/mnt.0 2026-03-09T20:20:46.823 DEBUG:teuthology.orchestra.run.vm05:> cd -- /home/ubuntu/cephtest/mnt.0 && mkdir -- client.0 2026-03-09T20:20:46.888 INFO:tasks.workunit:timeout=3h 2026-03-09T20:20:46.889 INFO:tasks.workunit:cleanup=True 2026-03-09T20:20:46.889 DEBUG:teuthology.orchestra.run.vm05:> rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone https://github.com/kshtsk/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout 569c3e99c9b32a51b4eaf08731c728f4513ed589 2026-03-09T20:20:46.944 INFO:tasks.workunit.client.0.vm05.stderr:Cloning into '/home/ubuntu/cephtest/clone.client.0'... 2026-03-09T20:20:47.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:47 vm09 ceph-mon[54524]: Updating vm05:/var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/config/ceph.conf 2026-03-09T20:20:47.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:47 vm09 ceph-mon[54524]: from='client.24632 v1:192.168.123.105:0/4109364722' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:20:47.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:47 vm09 ceph-mon[54524]: Updating vm09:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:20:47.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:47 vm09 ceph-mon[54524]: Updating vm09:/var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/config/ceph.client.admin.keyring 2026-03-09T20:20:47.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:47 vm09 ceph-mon[54524]: Updating vm05:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:20:47.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:47 vm09 ceph-mon[54524]: pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T20:20:47.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:47 vm09 ceph-mon[54524]: Updating vm05:/var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/config/ceph.client.admin.keyring 2026-03-09T20:20:47.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:47 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:47.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:47 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:47.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:47 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:47.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:47 vm09 ceph-mon[54524]: Deploying daemon alertmanager.a on vm05 2026-03-09T20:20:47.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:47 vm09 
ceph-mon[54524]: from='client.? v1:192.168.123.105:0/77145268' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T20:20:47.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:47 vm09 ceph-mon[54524]: mgrmap e21: y(active, since 2s), standbys: x 2026-03-09T20:20:47.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:47 vm05 ceph-mon[61345]: Updating vm05:/var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/config/ceph.conf 2026-03-09T20:20:47.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:47 vm05 ceph-mon[61345]: from='client.24632 v1:192.168.123.105:0/4109364722' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:20:47.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:47 vm05 ceph-mon[61345]: Updating vm09:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:20:47.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:47 vm05 ceph-mon[61345]: Updating vm09:/var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/config/ceph.client.admin.keyring 2026-03-09T20:20:47.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:47 vm05 ceph-mon[61345]: Updating vm05:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:20:47.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:47 vm05 ceph-mon[61345]: pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T20:20:47.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:47 vm05 ceph-mon[61345]: Updating vm05:/var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/config/ceph.client.admin.keyring 2026-03-09T20:20:47.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:47 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:47.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:47 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:47.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:47 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:47.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:47 vm05 ceph-mon[61345]: Deploying daemon alertmanager.a on vm05 2026-03-09T20:20:47.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:47 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/77145268' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T20:20:47.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:47 vm05 ceph-mon[61345]: mgrmap e21: y(active, since 2s), standbys: x 2026-03-09T20:20:47.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:47 vm05 ceph-mon[51870]: Updating vm05:/var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/config/ceph.conf 2026-03-09T20:20:47.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:47 vm05 ceph-mon[51870]: from='client.24632 v1:192.168.123.105:0/4109364722' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:20:47.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:47 vm05 ceph-mon[51870]: Updating vm09:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:20:47.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:47 vm05 ceph-mon[51870]: Updating vm09:/var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/config/ceph.client.admin.keyring 2026-03-09T20:20:47.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:47 vm05 ceph-mon[51870]: Updating vm05:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:20:47.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:47 vm05 ceph-mon[51870]: pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T20:20:47.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:47 vm05 ceph-mon[51870]: Updating vm05:/var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/config/ceph.client.admin.keyring 2026-03-09T20:20:47.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:47 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:47.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:47 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:47.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:47 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:47.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:47 vm05 ceph-mon[51870]: Deploying daemon alertmanager.a on vm05 2026-03-09T20:20:47.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:47 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/77145268' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T20:20:47.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:47 vm05 ceph-mon[51870]: mgrmap e21: y(active, since 2s), standbys: x 2026-03-09T20:20:48.909 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:20:48 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:20:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:20:49.544 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:49 vm05 ceph-mon[51870]: pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T20:20:49.544 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:49 vm05 ceph-mon[51870]: mgrmap e22: y(active, since 4s), standbys: x 2026-03-09T20:20:49.545 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:49 vm05 ceph-mon[61345]: pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T20:20:49.545 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:49 vm05 ceph-mon[61345]: mgrmap e22: y(active, since 4s), standbys: x 2026-03-09T20:20:49.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:49 vm09 ceph-mon[54524]: pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T20:20:49.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:49 vm09 ceph-mon[54524]: mgrmap e22: y(active, since 4s), standbys: x 2026-03-09T20:20:50.013 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:20:49 vm05 systemd[1]: Starting Ceph alertmanager.a for c0151936-1bf4-11f1-b896-23f7bea8a6ea... 2026-03-09T20:20:50.269 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:20:50 vm05 podman[92644]: 2026-03-09 20:20:50.013028434 +0000 UTC m=+0.076382806 volume create 7fa2ba9164284795ec3abc37de33aaa6029c88f66d3653b6290f512d257787b7 2026-03-09T20:20:50.269 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:20:50 vm05 podman[92644]: 2026-03-09 20:20:50.016517996 +0000 UTC m=+0.079872358 container create 5819767588bfdfbe4162967cca9d066dd73bc28af267fbf25a754776716c0fa2 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-alertmanager-a, maintainer=The Prometheus Authors ) 2026-03-09T20:20:50.270 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:20:50 vm05 podman[92644]: 2026-03-09 20:20:49.945713773 +0000 UTC m=+0.009068145 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0 2026-03-09T20:20:50.270 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:20:50 vm05 podman[92644]: 2026-03-09 20:20:50.096178496 +0000 UTC m=+0.159532868 container init 5819767588bfdfbe4162967cca9d066dd73bc28af267fbf25a754776716c0fa2 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-alertmanager-a, maintainer=The Prometheus Authors ) 2026-03-09T20:20:50.270 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:20:50 vm05 podman[92644]: 2026-03-09 20:20:50.106986587 +0000 UTC m=+0.170340959 container start 5819767588bfdfbe4162967cca9d066dd73bc28af267fbf25a754776716c0fa2 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-alertmanager-a, maintainer=The Prometheus Authors ) 2026-03-09T20:20:50.270 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:20:50 vm05 bash[92644]: 5819767588bfdfbe4162967cca9d066dd73bc28af267fbf25a754776716c0fa2 
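The workunit task logged above recreates /home/ubuntu/cephtest/clone.client.0 by cloning the suite repo and checking out the listed ref before running rados/test.sh and rados/test_pool_quota.sh. A minimal sketch of the same fetch for reproducing the run by hand (the helper name and return path are illustrative; the workunit scripts conventionally live under qa/workunits in the clone):

    import shutil
    import subprocess
    from pathlib import Path

    REPO = "https://github.com/kshtsk/ceph.git"           # repo from the clone command above
    SHA1 = "569c3e99c9b32a51b4eaf08731c728f4513ed589"      # ref the workunit task checks out
    CLONE = Path("/home/ubuntu/cephtest/clone.client.0")

    def fetch_workunits():
        shutil.rmtree(CLONE, ignore_errors=True)           # rm -rf, as in the logged command
        subprocess.run(["git", "clone", REPO, str(CLONE)], check=True)
        subprocess.run(["git", "checkout", SHA1], cwd=CLONE, check=True)
        return CLONE / "qa" / "workunits"                  # rados/test.sh is run from here

    print("workunits at", fetch_workunits())

This mirrors the rm -rf / git clone / git checkout sequence the task issued on vm05, with the 3-hour timeout and cleanup handled by the harness itself.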
2026-03-09T20:20:50.270 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:20:50 vm05 systemd[1]: Started Ceph alertmanager.a for c0151936-1bf4-11f1-b896-23f7bea8a6ea. 2026-03-09T20:20:50.270 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:20:50 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-alertmanager-a[92655]: ts=2026-03-09T20:20:50.120Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)" 2026-03-09T20:20:50.270 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:20:50 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-alertmanager-a[92655]: ts=2026-03-09T20:20:50.120Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)" 2026-03-09T20:20:50.270 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:20:50 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-alertmanager-a[92655]: ts=2026-03-09T20:20:50.121Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.123.105 port=9094 2026-03-09T20:20:50.270 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:20:50 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-alertmanager-a[92655]: ts=2026-03-09T20:20:50.122Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s 2026-03-09T20:20:50.270 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:20:50 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-alertmanager-a[92655]: ts=2026-03-09T20:20:50.144Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-09T20:20:50.270 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:20:50 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-alertmanager-a[92655]: ts=2026-03-09T20:20:50.144Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-09T20:20:50.270 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:20:50 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-alertmanager-a[92655]: ts=2026-03-09T20:20:50.146Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9093 2026-03-09T20:20:50.270 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:20:50 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-alertmanager-a[92655]: ts=2026-03-09T20:20:50.146Z caller=tls_config.go:235 level=info msg="TLS is disabled." 
http2=false address=[::]:9093 2026-03-09T20:20:50.637 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:20:50 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: [09/Mar/2026:20:20:50] ENGINE Bus STOPPING 2026-03-09T20:20:50.637 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:20:50 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: [09/Mar/2026:20:20:50] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-09T20:20:50.637 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:20:50 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: [09/Mar/2026:20:20:50] ENGINE Bus STOPPED 2026-03-09T20:20:50.637 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:20:50 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: [09/Mar/2026:20:20:50] ENGINE Bus STARTING 2026-03-09T20:20:50.909 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:20:50 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: [09/Mar/2026:20:20:50] ENGINE Serving on http://:::9283 2026-03-09T20:20:50.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:20:50 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: [09/Mar/2026:20:20:50] ENGINE Bus STARTED 2026-03-09T20:20:51.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:51 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:51.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:51 vm09 ceph-mon[54524]: pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T20:20:51.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:51 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:51.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:51 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:51.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:51 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:51.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:51 vm09 ceph-mon[54524]: Regenerating cephadm self-signed grafana TLS certificates 2026-03-09T20:20:51.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:51 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:51.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:51 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:51.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:51 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T20:20:51.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:51 vm09 ceph-mon[54524]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T20:20:51.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:51 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:51.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:51 vm09 ceph-mon[54524]: Deploying daemon grafana.a on vm09 2026-03-09T20:20:51.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:51 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:51.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:51 vm05 ceph-mon[51870]: pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T20:20:51.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:51 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:51.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:51 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:51.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:51 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:51.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:51 vm05 ceph-mon[51870]: Regenerating cephadm self-signed grafana TLS certificates 2026-03-09T20:20:51.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:51 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:51.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:51 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:51.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:51 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T20:20:51.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:51 vm05 ceph-mon[51870]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T20:20:51.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:51 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:51.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:51 vm05 ceph-mon[51870]: Deploying daemon grafana.a on vm09 2026-03-09T20:20:51.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:51 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:51.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:51 vm05 ceph-mon[61345]: pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-09T20:20:51.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:51 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:51.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:51 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:51.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:51 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:51.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:51 vm05 ceph-mon[61345]: Regenerating cephadm self-signed grafana TLS certificates 2026-03-09T20:20:51.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:51 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:51.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:51 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:51.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:51 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T20:20:51.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:51 vm05 ceph-mon[61345]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T20:20:51.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:51 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:51.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:51 vm05 ceph-mon[61345]: Deploying daemon grafana.a on vm09 2026-03-09T20:20:52.409 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:20:52 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-alertmanager-a[92655]: ts=2026-03-09T20:20:52.122Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000229668s 2026-03-09T20:20:53.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:53 vm09 ceph-mon[54524]: pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-09T20:20:53.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:53 vm05 ceph-mon[51870]: pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-09T20:20:53.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:53 vm05 ceph-mon[61345]: pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-09T20:20:55.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:55 vm09 ceph-mon[54524]: pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T20:20:55.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:55 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:55.523 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:20:55 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:20:55.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:55 vm05 ceph-mon[51870]: pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T20:20:55.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:55 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:55.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:55 vm05 ceph-mon[61345]: pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T20:20:55.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:55 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:56.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:56 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:20:56.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:56 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:20:56.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:56 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:20:57.414 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:57 vm09 ceph-mon[54524]: pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 
2026-03-09T20:20:57.736 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 systemd[1]: Starting Ceph grafana.a for c0151936-1bf4-11f1-b896-23f7bea8a6ea... 2026-03-09T20:20:57.914 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:57 vm05 ceph-mon[51870]: pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T20:20:57.914 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:57 vm05 ceph-mon[61345]: pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T20:20:57.988 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 podman[81651]: 2026-03-09 20:20:57.736152421 +0000 UTC m=+0.019233072 container create 82826c9f558ac40b47b5aceec014cbf22c07fed9ea3f3e656a517738f2d5cb8a (image=quay.io/ceph/grafana:10.4.0, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a, maintainer=Grafana Labs ) 2026-03-09T20:20:57.988 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 podman[81651]: 2026-03-09 20:20:57.789199944 +0000 UTC m=+0.072280586 container init 82826c9f558ac40b47b5aceec014cbf22c07fed9ea3f3e656a517738f2d5cb8a (image=quay.io/ceph/grafana:10.4.0, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a, maintainer=Grafana Labs ) 2026-03-09T20:20:57.988 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 podman[81651]: 2026-03-09 20:20:57.793407932 +0000 UTC m=+0.076488572 container start 82826c9f558ac40b47b5aceec014cbf22c07fed9ea3f3e656a517738f2d5cb8a (image=quay.io/ceph/grafana:10.4.0, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a, maintainer=Grafana Labs ) 2026-03-09T20:20:57.988 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 bash[81651]: 82826c9f558ac40b47b5aceec014cbf22c07fed9ea3f3e656a517738f2d5cb8a 2026-03-09T20:20:57.988 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 podman[81651]: 2026-03-09 20:20:57.728086704 +0000 UTC m=+0.011167364 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0 2026-03-09T20:20:57.988 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 systemd[1]: Started Ceph grafana.a for c0151936-1bf4-11f1-b896-23f7bea8a6ea. 
2026-03-09T20:20:57.988 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=settings t=2026-03-09T20:20:57.897882527Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2026-03-09T20:20:57Z 2026-03-09T20:20:57.988 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=settings t=2026-03-09T20:20:57.89813037Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini 2026-03-09T20:20:57.988 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=settings t=2026-03-09T20:20:57.898163232Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini 2026-03-09T20:20:57.988 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=settings t=2026-03-09T20:20:57.898177288Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" 2026-03-09T20:20:57.988 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=settings t=2026-03-09T20:20:57.898189811Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" 2026-03-09T20:20:57.988 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=settings t=2026-03-09T20:20:57.898202254Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" 2026-03-09T20:20:57.988 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=settings t=2026-03-09T20:20:57.898214457Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" 2026-03-09T20:20:57.988 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=settings t=2026-03-09T20:20:57.89822635Z level=info msg="Config overridden from command line" arg="default.log.mode=console" 2026-03-09T20:20:57.988 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=settings t=2026-03-09T20:20:57.898238593Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" 2026-03-09T20:20:57.988 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=settings t=2026-03-09T20:20:57.898251326Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" 2026-03-09T20:20:57.988 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=settings t=2026-03-09T20:20:57.898262999Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" 2026-03-09T20:20:57.988 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=settings t=2026-03-09T20:20:57.898274841Z level=info msg="Config overridden from Environment variable" 
var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" 2026-03-09T20:20:57.988 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=settings t=2026-03-09T20:20:57.898286763Z level=info msg=Target target=[all] 2026-03-09T20:20:57.988 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=settings t=2026-03-09T20:20:57.898300538Z level=info msg="Path Home" path=/usr/share/grafana 2026-03-09T20:20:57.988 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=settings t=2026-03-09T20:20:57.898313032Z level=info msg="Path Data" path=/var/lib/grafana 2026-03-09T20:20:57.988 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=settings t=2026-03-09T20:20:57.898324703Z level=info msg="Path Logs" path=/var/log/grafana 2026-03-09T20:20:57.988 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=settings t=2026-03-09T20:20:57.898346925Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins 2026-03-09T20:20:57.988 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=settings t=2026-03-09T20:20:57.898358878Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning 2026-03-09T20:20:57.988 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=settings t=2026-03-09T20:20:57.89837097Z level=info msg="App mode production" 2026-03-09T20:20:57.988 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=sqlstore t=2026-03-09T20:20:57.898556127Z level=info msg="Connecting to DB" dbtype=sqlite3 2026-03-09T20:20:57.988 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=sqlstore t=2026-03-09T20:20:57.89857911Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r----- 2026-03-09T20:20:57.988 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.898914739Z level=info msg="Starting DB migrations" 2026-03-09T20:20:57.988 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.899947572Z level=info msg="Executing migration" id="create migration_log table" 2026-03-09T20:20:57.988 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.900626081Z level=info msg="Migration successfully executed" id="create migration_log table" duration=678.449µs 2026-03-09T20:20:57.988 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.901284153Z level=info msg="Executing migration" id="create user table" 2026-03-09T20:20:57.988 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 
ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.901660278Z level=info msg="Migration successfully executed" id="create user table" duration=376.095µs 2026-03-09T20:20:57.988 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.902202492Z level=info msg="Executing migration" id="add unique index user.login" 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.902642005Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=439.393µs 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.903137993Z level=info msg="Executing migration" id="add unique index user.email" 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.903489742Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=351.629µs 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.903992383Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.904353569Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=361.427µs 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.90488311Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.905266688Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=383.859µs 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.905730205Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.906730548Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=1.000232ms 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.907178096Z level=info msg="Executing migration" id="create user table v2" 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.907563948Z level=info msg="Migration successfully executed" 
id="create user table v2" duration=386.053µs 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.908032114Z level=info msg="Executing migration" id="create index UQE_user_login - v2" 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.908385135Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=352.881µs 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.908887795Z level=info msg="Executing migration" id="create index UQE_user_email - v2" 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.909230477Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=342.602µs 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.909698973Z level=info msg="Executing migration" id="copy data_source v1 to v2" 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.909904299Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=205.365µs 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.910337539Z level=info msg="Executing migration" id="Drop old table user_v1" 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.910712301Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=374.892µs 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.91117157Z level=info msg="Executing migration" id="Add column help_flags1 to user table" 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.911677517Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=504.644µs 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.912224802Z level=info msg="Executing migration" id="Update user table charset" 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.912255369Z level=info msg="Migration successfully executed" id="Update user table charset" duration=30.658µs 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 
ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.912744134Z level=info msg="Executing migration" id="Add last_seen_at column to user" 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.913197041Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=451.133µs 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.913620164Z level=info msg="Executing migration" id="Add missing user data" 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.913771467Z level=info msg="Migration successfully executed" id="Add missing user data" duration=151.633µs 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.914337055Z level=info msg="Executing migration" id="Add is_disabled column to user" 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.914816012Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=479.047µs 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.915250485Z level=info msg="Executing migration" id="Add index user.login/user.email" 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.915639874Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=389.529µs 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.916081351Z level=info msg="Executing migration" id="Add is_service_account column to user" 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.916576398Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=495.036µs 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.917010519Z level=info msg="Executing migration" id="Update is_service_account column to nullable" 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.919856216Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=2.845707ms 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator 
t=2026-03-09T20:20:57.920384776Z level=info msg="Executing migration" id="Add uid column to user" 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.920872167Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=487.332µs 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.921309317Z level=info msg="Executing migration" id="Update uid column values for users" 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.921430764Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=121.608µs 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.921959804Z level=info msg="Executing migration" id="Add unique index user_uid" 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.92232632Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=366.506µs 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.92285534Z level=info msg="Executing migration" id="create temp user table v1-7" 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.923214953Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=359.582µs 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.923745617Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.924095141Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=349.594µs 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.924647685Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.925038938Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=391.734µs 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.925553029Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" 
2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.925934163Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=381.123µs 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.926394304Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.926789013Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=393.427µs 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.927238824Z level=info msg="Executing migration" id="Update temp_user table charset" 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.927269452Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=31.078µs 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.927639165Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.927988989Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=349.564µs 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.928435816Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" 2026-03-09T20:20:57.989 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.928789227Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=353.511µs 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.92922377Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.929619321Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=395.57µs 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.930070535Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" 2026-03-09T20:20:57.990 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.930447571Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=377.117µs 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.930953037Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.932126394Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=1.174299ms 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.932662207Z level=info msg="Executing migration" id="create temp_user v2" 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.933064109Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=401.802µs 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.933480708Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.933911525Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=430.435µs 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.934355806Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.934759031Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=403.695µs 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.935210626Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.935599244Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=388.607µs 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.936091886Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 
20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.936442021Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=350.035µs 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.936922671Z level=info msg="Executing migration" id="copy temp_user v1 to v2" 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.937126192Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=203.802µs 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.937556307Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.937842052Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=285.725µs 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.938307393Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.938543986Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=236.684µs 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.93905347Z level=info msg="Executing migration" id="create star table" 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.939388726Z level=info msg="Migration successfully executed" id="create star table" duration=334.886µs 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.939845622Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.940206989Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=361.155µs 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.940682669Z level=info msg="Executing migration" id="create org table v1" 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: 
logger=migrator t=2026-03-09T20:20:57.941046269Z level=info msg="Migration successfully executed" id="create org table v1" duration=363.591µs 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.941483408Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.942011336Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=527.777µs 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.942475385Z level=info msg="Executing migration" id="create org_user table v1" 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.942993875Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=518.36µs 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.943449147Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.9438622Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=413.033µs 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.944327843Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.944727831Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=399.667µs 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.945167844Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.949783775Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=387.254µs 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.950404947Z level=info msg="Executing migration" id="Update org table charset" 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.95043887Z level=info msg="Migration successfully executed" 
id="Update org table charset" duration=34.555µs 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.950940009Z level=info msg="Executing migration" id="Update org_user table charset" 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.950976116Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=36.699µs 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.951330349Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.951455203Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=119.433µs 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.951984343Z level=info msg="Executing migration" id="create dashboard table" 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.952392467Z level=info msg="Migration successfully executed" id="create dashboard table" duration=407.964µs 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.952866926Z level=info msg="Executing migration" id="add index dashboard.account_id" 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.953979738Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.116229ms 2026-03-09T20:20:57.990 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.95452016Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.954933163Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=413.143µs 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.955400798Z level=info msg="Executing migration" id="create dashboard_tag table" 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.955766362Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=365.444µs 2026-03-09T20:20:57.991 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.956254617Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.956679522Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=424.724µs 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.957201549Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.957579917Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=378.608µs 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.958049707Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.960111015Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=2.060687ms 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.960629966Z level=info msg="Executing migration" id="create dashboard v2" 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.961012433Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=382.366µs 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.961491549Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.961900515Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=409.207µs 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.962409387Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.962802143Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=392.887µs 2026-03-09T20:20:57.991 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.963269987Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.96349609Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=226.593µs 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.963964668Z level=info msg="Executing migration" id="drop table dashboard_v1" 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.964415001Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=450.162µs 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.964938321Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.965009855Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=71.925µs 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.965554114Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.966304798Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=750.494µs 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.966863975Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.967591437Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=727.542µs 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.968188143Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.969045829Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=857.896µs 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 
ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.969601008Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.970049428Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=448.45µs 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.970983306Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.971909249Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=925.803µs 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.972448258Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.972926934Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=477.234µs 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.973406401Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.973842768Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=436.097µs 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.974360697Z level=info msg="Executing migration" id="Update dashboard table charset" 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.974423524Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=63.058µs 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.974967383Z level=info msg="Executing migration" id="Update dashboard_tag table charset" 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.975031723Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=64.832µs 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: 
logger=migrator t=2026-03-09T20:20:57.975436511Z level=info msg="Executing migration" id="Add column folder_id in dashboard" 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.976195672Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=760.113µs 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.976729772Z level=info msg="Executing migration" id="Add column isFolder in dashboard" 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.977404805Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=674.712µs 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.977883641Z level=info msg="Executing migration" id="Add column has_acl in dashboard" 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.978581899Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=698.408µs 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.979057629Z level=info msg="Executing migration" id="Add column uid in dashboard" 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.979765123Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=707.465µs 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.980235754Z level=info msg="Executing migration" id="Update uid column values in dashboard" 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.980390504Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=153.889µs 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.98092305Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.981288686Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=365.665µs 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.981769185Z level=info msg="Executing migration" 
id="Remove unique index org_id_slug" 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.9821353Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=366.217µs 2026-03-09T20:20:57.991 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.982587506Z level=info msg="Executing migration" id="Update dashboard title length" 2026-03-09T20:20:57.992 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.982597224Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=10.189µs 2026-03-09T20:20:57.992 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.983103362Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" 2026-03-09T20:20:57.992 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.983472242Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=368.58µs 2026-03-09T20:20:57.992 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.983956368Z level=info msg="Executing migration" id="create dashboard_provisioning" 2026-03-09T20:20:57.992 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.984309079Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=352.37µs 2026-03-09T20:20:57.992 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.984825705Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 2026-03-09T20:20:57.992 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.986664757Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=1.838963ms 2026-03-09T20:20:57.992 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.987130309Z level=info msg="Executing migration" id="create dashboard_provisioning v2" 2026-03-09T20:20:58.242 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.994073676Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=6.940701ms 2026-03-09T20:20:58.243 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.994752035Z level=info msg="Executing 
migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" 2026-03-09T20:20:58.243 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.995145242Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=393.137µs 2026-03-09T20:20:58.243 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.995714697Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" 2026-03-09T20:20:58.243 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.996120207Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=403.936µs 2026-03-09T20:20:58.243 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.9966189Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" 2026-03-09T20:20:58.243 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.996811009Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=192.991µs 2026-03-09T20:20:58.243 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.997269378Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" 2026-03-09T20:20:58.243 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.997588775Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=319.228µs 2026-03-09T20:20:58.243 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.998030503Z level=info msg="Executing migration" id="Add check_sum column" 2026-03-09T20:20:58.243 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.998791407Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=760.815µs 2026-03-09T20:20:58.243 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.99923166Z level=info msg="Executing migration" id="Add index for dashboard_title" 2026-03-09T20:20:58.243 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:57 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:57.999637099Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=404.346µs 2026-03-09T20:20:58.243 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.000101499Z level=info msg="Executing 
migration" id="delete tags for deleted dashboards" 2026-03-09T20:20:58.243 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.000213569Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=112.189µs 2026-03-09T20:20:58.243 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.000723804Z level=info msg="Executing migration" id="delete stars for deleted dashboards" 2026-03-09T20:20:58.243 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.000834942Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=111.148µs 2026-03-09T20:20:58.243 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.001338634Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" 2026-03-09T20:20:58.243 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.001738212Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=399.729µs 2026-03-09T20:20:58.243 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.002194777Z level=info msg="Executing migration" id="Add isPublic for dashboard" 2026-03-09T20:20:58.243 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.002966972Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=773.318µs 2026-03-09T20:20:58.243 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.003482147Z level=info msg="Executing migration" id="create data_source table" 2026-03-09T20:20:58.243 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.003941006Z level=info msg="Migration successfully executed" id="create data_source table" duration=458.649µs 2026-03-09T20:20:58.243 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.004442284Z level=info msg="Executing migration" id="add index data_source.account_id" 2026-03-09T20:20:58.243 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.004849226Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=407.082µs 2026-03-09T20:20:58.243 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.005323944Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" 2026-03-09T20:20:58.243 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.005728672Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=404.588µs 2026-03-09T20:20:58.243 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.006237765Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" 2026-03-09T20:20:58.243 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.006665586Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=427.77µs 2026-03-09T20:20:58.243 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.007127981Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" 2026-03-09T20:20:58.243 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.007539811Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=412.532µs 2026-03-09T20:20:58.243 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.008067239Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" 2026-03-09T20:20:58.243 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.009893728Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=1.826709ms 2026-03-09T20:20:58.243 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.010416226Z level=info msg="Executing migration" id="create data_source table v2" 2026-03-09T20:20:58.243 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.010847523Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=430.576µs 2026-03-09T20:20:58.243 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.011375321Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" 2026-03-09T20:20:58.243 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.011819923Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=444.744µs 2026-03-09T20:20:58.243 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.012318947Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" 
2026-03-09T20:20:58.243 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.012743623Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=424.546µs 2026-03-09T20:20:58.243 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.013229251Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" 2026-03-09T20:20:58.243 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.013540033Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=310.681µs 2026-03-09T20:20:58.243 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.014015132Z level=info msg="Executing migration" id="Add column with_credentials" 2026-03-09T20:20:58.243 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.014845667Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=830.494µs 2026-03-09T20:20:58.243 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.015335634Z level=info msg="Executing migration" id="Add secure json data column" 2026-03-09T20:20:58.243 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.0161722Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=836.626µs 2026-03-09T20:20:58.243 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.016741395Z level=info msg="Executing migration" id="Update data_source table charset" 2026-03-09T20:20:58.243 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.016782222Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=41.177µs 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.017350596Z level=info msg="Executing migration" id="Update initial version to 1" 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.017477794Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=127.639µs 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.018000281Z level=info msg="Executing migration" id="Add read_only data column" 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 
ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.018829764Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=828.6µs 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.019325442Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.019455706Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=130.263µs 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.020012688Z level=info msg="Executing migration" id="Update json_data with nulls" 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.020136721Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=123.893µs 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.020668485Z level=info msg="Executing migration" id="Add uid column" 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.021476789Z level=info msg="Migration successfully executed" id="Add uid column" duration=808.073µs 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.021961477Z level=info msg="Executing migration" id="Update uid value" 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.022090407Z level=info msg="Migration successfully executed" id="Update uid value" duration=129.151µs 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.02267448Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.023109646Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=434.644µs 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.023594883Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.024018757Z level=info msg="Migration successfully executed" id="add unique 
index datasource_org_id_is_default" duration=423.253µs 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.024545092Z level=info msg="Executing migration" id="create api_key table" 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.024933199Z level=info msg="Migration successfully executed" id="create api_key table" duration=387.776µs 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.025421172Z level=info msg="Executing migration" id="add index api_key.account_id" 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.025869982Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=448.66µs 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.026352856Z level=info msg="Executing migration" id="add index api_key.key" 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.026758516Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=405.5µs 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.027273308Z level=info msg="Executing migration" id="add index api_key.account_id_name" 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.027723171Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=449.763µs 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.028271106Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.028692104Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=421.029µs 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.029179336Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.029606627Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=427.33µs 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 
ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.030129574Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.030551344Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=421.359µs 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.031442543Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.033719674Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=2.277672ms 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.034226924Z level=info msg="Executing migration" id="create api_key table v2" 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.03462528Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=398.385µs 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.035121569Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.035562604Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=440.735µs 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.036121631Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.036545655Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=423.923µs 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.037036243Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.037442614Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=406.29µs 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: 
logger=migrator t=2026-03-09T20:20:58.037946707Z level=info msg="Executing migration" id="copy api_key v1 to v2" 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.038149657Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=203.39µs 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.038613636Z level=info msg="Executing migration" id="Drop old table api_key_v1" 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.038902296Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=288.75µs 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.03941251Z level=info msg="Executing migration" id="Update api_key table charset" 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.039446925Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=34.805µs 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.040011322Z level=info msg="Executing migration" id="Add expires to api_key table" 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.040871321Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=859.97µs 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.041345139Z level=info msg="Executing migration" id="Add service account foreign key" 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.042203955Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=858.696µs 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.042760808Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.042878028Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=117.499µs 2026-03-09T20:20:58.244 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.043410053Z level=info msg="Executing migration" id="Add last_used_at to api_key table" 
2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.044321159Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=910.153µs 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.044843446Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.045767766Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=923.529µs 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.046236454Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.04662965Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=392.925µs 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.04715348Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.047443503Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=289.993µs 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.047979196Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.048384524Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=405.078µs 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.048950494Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.049384566Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=434.202µs 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.049860026Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" 2026-03-09T20:20:58.245 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.050302735Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=442.568µs 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.050800807Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.051223689Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=422.682µs 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.05173183Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.051809596Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=78.257µs 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.052327715Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.052364344Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=36.929µs 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.05290175Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.053837211Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=935.711µs 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.054354338Z level=info msg="Executing migration" id="Add encrypted dashboard json column" 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.055287236Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=933.037µs 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.055893319Z level=info msg="Executing migration" id="Change 
dashboard_encrypted column to MEDIUMBLOB" 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.05595777Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=64.932µs 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.056505104Z level=info msg="Executing migration" id="create quota table v1" 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.056881909Z level=info msg="Migration successfully executed" id="create quota table v1" duration=376.955µs 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.05744276Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.057875319Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=432.379µs 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.058364906Z level=info msg="Executing migration" id="Update quota table charset" 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.058399381Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=34.945µs 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.058944731Z level=info msg="Executing migration" id="create plugin_setting table" 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.05931282Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=368.419µs 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.059822654Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.060254082Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=431.639µs 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.060746173Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" 
2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.061711821Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=965.437µs 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.062221765Z level=info msg="Executing migration" id="Update plugin_setting table charset" 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.06225649Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=35.476µs 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.062866462Z level=info msg="Executing migration" id="create session table" 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.063283021Z level=info msg="Migration successfully executed" id="create session table" duration=416.469µs 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.063867185Z level=info msg="Executing migration" id="Drop old table playlist table" 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.063953065Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=86.271µs 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.064456267Z level=info msg="Executing migration" id="Drop old table playlist_item table" 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.064546868Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=91.011µs 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.065104531Z level=info msg="Executing migration" id="create playlist table v2" 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.065468883Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=364.263µs 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.065997082Z level=info msg="Executing migration" id="create playlist item table v2" 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 
ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.06635448Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=357.298µs 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.06702727Z level=info msg="Executing migration" id="Update playlist table charset" 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.067063277Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=36.758µs 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.067611113Z level=info msg="Executing migration" id="Update playlist_item table charset" 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.067646619Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=36.248µs 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.068241503Z level=info msg="Executing migration" id="Add playlist column created_at" 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.069311065Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=1.069591ms 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.069906449Z level=info msg="Executing migration" id="Add playlist column updated_at" 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.07093315Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=1.026642ms 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.071418169Z level=info msg="Executing migration" id="drop preferences table v2" 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.071496225Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=77.746µs 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.072040604Z level=info msg="Executing migration" id="drop preferences table v3" 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.072120945Z level=info msg="Migration 
successfully executed" id="drop preferences table v3" duration=80.551µs 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.072701631Z level=info msg="Executing migration" id="create preferences table v3" 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.073087243Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=385.643µs 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.073585185Z level=info msg="Executing migration" id="Update preferences table charset" 2026-03-09T20:20:58.245 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.07362554Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=36.199µs 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.074140705Z level=info msg="Executing migration" id="Add column team_id in preferences" 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.07518012Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=1.039475ms 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.075648036Z level=info msg="Executing migration" id="Update team_id column values in preferences" 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.07575666Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=109.013µs 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.076301439Z level=info msg="Executing migration" id="Add column week_start in preferences" 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.077371452Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=1.070103ms 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.077890504Z level=info msg="Executing migration" id="Add column preferences.json_data" 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.078928236Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=1.037633ms 
2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.07940553Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.079468718Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=63.408µs 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.080101652Z level=info msg="Executing migration" id="Add preferences index org_id" 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.080597892Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=496.891µs 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.081158861Z level=info msg="Executing migration" id="Add preferences index user_id" 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.081625685Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=466.594µs 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.082123648Z level=info msg="Executing migration" id="create alert table v1" 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.082648269Z level=info msg="Migration successfully executed" id="create alert table v1" duration=524.461µs 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.083240969Z level=info msg="Executing migration" id="add index alert org_id & id " 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.083735564Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=494.656µs 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.084323826Z level=info msg="Executing migration" id="add index alert state" 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.084751535Z level=info msg="Migration successfully executed" id="add index alert state" duration=427.64µs 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: 
logger=migrator t=2026-03-09T20:20:58.085285676Z level=info msg="Executing migration" id="add index alert dashboard_id" 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.085726861Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=441.115µs 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.086228781Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.086590417Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=361.666µs 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.087090304Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.087563499Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=473.075µs 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.088084434Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.088528906Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=444.472µs 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.089013253Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.092230886Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=3.214055ms 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.092939552Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.093357825Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=418.223µs 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: 
logger=migrator t=2026-03-09T20:20:58.093858181Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.094298136Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=439.654µs 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.094795626Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.094987445Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=192.179µs 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.095447467Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.095760593Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=313.366µs 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.09624551Z level=info msg="Executing migration" id="create alert_notification table v1" 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.096611145Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=366.596µs 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.097056569Z level=info msg="Executing migration" id="Add column is_default" 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.098237298Z level=info msg="Migration successfully executed" id="Add column is_default" duration=1.18048ms 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.098715052Z level=info msg="Executing migration" id="Add column frequency" 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.099972927Z level=info msg="Migration successfully executed" id="Add column frequency" duration=1.257604ms 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator 
t=2026-03-09T20:20:58.100459998Z level=info msg="Executing migration" id="Add column send_reminder" 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.101749223Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=1.289274ms 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.102233879Z level=info msg="Executing migration" id="Add column disable_resolve_message" 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.103342604Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=1.108736ms 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.103857859Z level=info msg="Executing migration" id="add index alert_notification org_id & name" 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.104263137Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=405.188µs 2026-03-09T20:20:58.246 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.104708742Z level=info msg="Executing migration" id="Update alert table charset" 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.104740351Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=32.18µs 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.10523124Z level=info msg="Executing migration" id="Update alert_notification table charset" 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.10526364Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=32.892µs 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.105640355Z level=info msg="Executing migration" id="create notification_journal table v1" 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.105975212Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=334.336µs 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.106417741Z level=info msg="Executing migration" id="add index 
notification_journal org_id & alert_id & notifier_id" 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.106838097Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=420.096µs 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.107381505Z level=info msg="Executing migration" id="drop alert_notification_journal" 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.107777034Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=395.389µs 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.108277422Z level=info msg="Executing migration" id="create alert_notification_state table v1" 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.108651822Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=374.36µs 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.109102707Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.10950553Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=402.714µs 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.109983385Z level=info msg="Executing migration" id="Add for to alert table" 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.111109412Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=1.125927ms 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.111575575Z level=info msg="Executing migration" id="Add column uid in alert_notification" 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.112720869Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=1.145986ms 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.113172144Z level=info msg="Executing 
migration" id="Update uid column values in alert_notification" 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.113285866Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=113.762µs 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.113753813Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.114151296Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=397.403µs 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.114588485Z level=info msg="Executing migration" id="Remove unique index org_id_name" 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.115003812Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=415.898µs 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.115447493Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.116689568Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=1.242164ms 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.117134049Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.117190595Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=57.076µs 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.117677777Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.118077024Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=399.307µs 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator 
t=2026-03-09T20:20:58.118535543Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.118998369Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=461.643µs 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.119478758Z level=info msg="Executing migration" id="Drop old annotation table v4" 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.119557485Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=80.07µs 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.12015782Z level=info msg="Executing migration" id="create annotation table v5" 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.120574709Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=416.889µs 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.121017377Z level=info msg="Executing migration" id="add index annotation 0 v3" 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.121419Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=401.603µs 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.121873821Z level=info msg="Executing migration" id="add index annotation 1 v3" 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.122265926Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=392.124µs 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.122702752Z level=info msg="Executing migration" id="add index annotation 2 v3" 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.123100788Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=398.276µs 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.123571288Z level=info msg="Executing migration" id="add index annotation 3 v3" 2026-03-09T20:20:58.247 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.12401Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=438.572µs 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.124468788Z level=info msg="Executing migration" id="add index annotation 4 v3" 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.124918921Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=450.153µs 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.125378411Z level=info msg="Executing migration" id="Update annotation table charset" 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.125408608Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=30.687µs 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.125880491Z level=info msg="Executing migration" id="Add column region_id to annotation table" 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.127122686Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=1.242174ms 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.127661214Z level=info msg="Executing migration" id="Drop category_id index" 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.128063377Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=402.152µs 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.128533186Z level=info msg="Executing migration" id="Add column tags to annotation table" 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.12975873Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=1.225885ms 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.13022924Z level=info msg="Executing migration" id="Create annotation_tag table v2" 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: 
logger=migrator t=2026-03-09T20:20:58.13056528Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=335.94µs 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.130991948Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.13141997Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=427.049µs 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.131886683Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.132290149Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=403.426µs 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.132731695Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.135987379Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=3.255484ms 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.136451006Z level=info msg="Executing migration" id="Create annotation_tag table v3" 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.136788599Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=336.15µs 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.137231939Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" 2026-03-09T20:20:58.247 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.137663927Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=431.897µs 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.13810909Z level=info msg="Executing migration" id="copy 
annotation_tag v2 to v3" 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.138279389Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=170.208µs 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.138666444Z level=info msg="Executing migration" id="drop table annotation_tag_v2" 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.13893689Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=270.566µs 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.13934885Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.13945543Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=106.729µs 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.139979291Z level=info msg="Executing migration" id="Add created time to annotation table" 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.141185809Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=1.207149ms 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.14165101Z level=info msg="Executing migration" id="Add updated time to annotation table" 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.142830267Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=1.180078ms 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.143288295Z level=info msg="Executing migration" id="Add index for created in annotation table" 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.143703853Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=415.246µs 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.144168222Z level=info msg="Executing migration" id="Add index for updated in annotation table" 
2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.14457334Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=403.575µs 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.145023542Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.145151102Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=127.619µs 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.145601915Z level=info msg="Executing migration" id="Add epoch_end column" 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.146813502Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=1.211557ms 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.147256532Z level=info msg="Executing migration" id="Add index for epoch_end" 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.147669745Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=413.404µs 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.148114128Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.148216148Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=103.253µs 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.148681289Z level=info msg="Executing migration" id="Move region to single row" 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.148869863Z level=info msg="Migration successfully executed" id="Move region to single row" duration=188.824µs 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.149359548Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 
vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.149772833Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=413.144µs 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.150184533Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.15059488Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=409.856µs 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.151065993Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.151468186Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=402.744µs 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.151963934Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.152356829Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=392.815µs 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.15278978Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.153181061Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=391.523µs 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.153638348Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.154028137Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=389.679µs 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator 
t=2026-03-09T20:20:58.154452332Z level=info msg="Executing migration" id="Increase tags column to length 4096" 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.154527753Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=45.465µs 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.155025985Z level=info msg="Executing migration" id="create test_data table" 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.1553809Z level=info msg="Migration successfully executed" id="create test_data table" duration=354.685µs 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.155869404Z level=info msg="Executing migration" id="create dashboard_version table v1" 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.156224138Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=354.544µs 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.156663811Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.157056245Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=392.434µs 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.157507861Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.157933898Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=425.867µs 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.158414047Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.158535865Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=122.118µs 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator 
t=2026-03-09T20:20:58.159059195Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.159255683Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=196.768µs 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.15974078Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 2026-03-09T20:20:58.248 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.159808546Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=68.117µs 2026-03-09T20:20:58.249 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.160304125Z level=info msg="Executing migration" id="create team table" 2026-03-09T20:20:58.249 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.160652045Z level=info msg="Migration successfully executed" id="create team table" duration=346.007µs 2026-03-09T20:20:58.249 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.161119902Z level=info msg="Executing migration" id="add index team.org_id" 2026-03-09T20:20:58.249 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.161589761Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=469.608µs 2026-03-09T20:20:58.249 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.162103603Z level=info msg="Executing migration" id="add unique index team_org_id_name" 2026-03-09T20:20:58.249 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.162536704Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=433.081µs 2026-03-09T20:20:58.249 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.163007314Z level=info msg="Executing migration" id="Add column uid in team" 2026-03-09T20:20:58.249 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.164307318Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=1.299952ms 2026-03-09T20:20:58.249 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.165009132Z level=info msg="Executing migration" id="Update uid column 
values in team" 2026-03-09T20:20:58.249 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.165121091Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=112.17µs 2026-03-09T20:20:58.249 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.165619885Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 2026-03-09T20:20:58.249 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.166038909Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=418.703µs 2026-03-09T20:20:58.249 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.166505353Z level=info msg="Executing migration" id="create team member table" 2026-03-09T20:20:58.250 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.166857501Z level=info msg="Migration successfully executed" id="create team member table" duration=352.21µs 2026-03-09T20:20:58.250 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.167334585Z level=info msg="Executing migration" id="add index team_member.org_id" 2026-03-09T20:20:58.250 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.167740284Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=405.538µs 2026-03-09T20:20:58.250 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.168222426Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" 2026-03-09T20:20:58.250 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.168649786Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=427.19µs 2026-03-09T20:20:58.250 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.169166103Z level=info msg="Executing migration" id="add index team_member.team_id" 2026-03-09T20:20:58.250 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.169585718Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=420.587µs 2026-03-09T20:20:58.250 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.170025601Z level=info msg="Executing migration" id="Add column email to team table" 2026-03-09T20:20:58.250 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 
vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.171402809Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=1.377188ms 2026-03-09T20:20:58.250 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.172002191Z level=info msg="Executing migration" id="Add column external to team_member table" 2026-03-09T20:20:58.250 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.173428651Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=1.42636ms 2026-03-09T20:20:58.250 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.173894193Z level=info msg="Executing migration" id="Add column permission to team_member table" 2026-03-09T20:20:58.250 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.175223371Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=1.328376ms 2026-03-09T20:20:58.250 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.175687049Z level=info msg="Executing migration" id="create dashboard acl table" 2026-03-09T20:20:58.250 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.17612635Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=439.162µs 2026-03-09T20:20:58.250 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.176612521Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" 2026-03-09T20:20:58.250 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.177037476Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=424.875µs 2026-03-09T20:20:58.250 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.177474254Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" 2026-03-09T20:20:58.250 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.17794267Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=468.466µs 2026-03-09T20:20:58.250 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.178468926Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" 2026-03-09T20:20:58.250 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 
ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.178900634Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=431.618µs 2026-03-09T20:20:58.250 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.179354213Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" 2026-03-09T20:20:58.250 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.179764431Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=410.187µs 2026-03-09T20:20:58.250 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.18021845Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" 2026-03-09T20:20:58.250 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.180646652Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=428.212µs 2026-03-09T20:20:58.250 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.181108606Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" 2026-03-09T20:20:58.250 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.181551826Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=442.959µs 2026-03-09T20:20:58.250 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.181987271Z level=info msg="Executing migration" id="add index dashboard_permission" 2026-03-09T20:20:58.250 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.182414211Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=426.689µs 2026-03-09T20:20:58.250 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.18288442Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" 2026-03-09T20:20:58.250 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.183155178Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=270.788µs 2026-03-09T20:20:58.250 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.183658199Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" 2026-03-09T20:20:58.250 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 
ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.183801036Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=141.886µs 2026-03-09T20:20:58.250 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.184303106Z level=info msg="Executing migration" id="create tag table" 2026-03-09T20:20:58.250 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.184670974Z level=info msg="Migration successfully executed" id="create tag table" duration=367.527µs 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.185121558Z level=info msg="Executing migration" id="add index tag.key_value" 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.185551472Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=429.733µs 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.185995664Z level=info msg="Executing migration" id="create login attempt table" 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.186317907Z level=info msg="Migration successfully executed" id="create login attempt table" duration=322.423µs 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.186773911Z level=info msg="Executing migration" id="add index login_attempt.username" 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.18717988Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=404.897µs 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.187616207Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.188051803Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=435.626µs 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.188541579Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator 
t=2026-03-09T20:20:58.192953297Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=4.411477ms 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.193468692Z level=info msg="Executing migration" id="create login_attempt v2" 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.193816353Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=347.811µs 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.194278648Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.194702672Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=424.193µs 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.19518274Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.19535894Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=176.31µs 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.195823439Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.196107141Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=283.812µs 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.196634117Z level=info msg="Executing migration" id="create user auth table" 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.196953886Z level=info msg="Migration successfully executed" id="create user auth table" duration=320.16µs 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.197385514Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.197809667Z level=info msg="Migration successfully executed" 
id="create index IDX_user_auth_auth_module_auth_id - v1" duration=424.023µs 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.198248589Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.198305265Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=57.838µs 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.198799661Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.200256067Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=1.456375ms 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.200735033Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.202160461Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=1.425658ms 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.202605174Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.204038135Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=1.432752ms 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.204482187Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.205923264Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=1.440716ms 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.206353249Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.206781962Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" 
duration=428.682µs 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.207287758Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.208791392Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=1.503643ms 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.2092227Z level=info msg="Executing migration" id="create server_lock table" 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.209600097Z level=info msg="Migration successfully executed" id="create server_lock table" duration=377.696µs 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.21008783Z level=info msg="Executing migration" id="add index server_lock.operation_uid" 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.210503197Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=415.568µs 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.210978858Z level=info msg="Executing migration" id="create user auth token table" 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.211339252Z level=info msg="Migration successfully executed" id="create user auth token table" duration=360.404µs 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.2117871Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.212196707Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=409.506µs 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.212698978Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.213108994Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=410.056µs 2026-03-09T20:20:58.251 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.213577582Z level=info msg="Executing migration" id="add index user_auth_token.user_id" 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.214031011Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=453.178µs 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.214481102Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.216035893Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=1.55477ms 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.216467521Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.216900912Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=433.44µs 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.21734332Z level=info msg="Executing migration" id="create cache_data table" 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.217734683Z level=info msg="Migration successfully executed" id="create cache_data table" duration=390.891µs 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.218236232Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.218666677Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=430.145µs 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.21912277Z level=info msg="Executing migration" id="create short_url table v1" 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.219507341Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=382.948µs 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 
ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.219987148Z level=info msg="Executing migration" id="add index short_url.org_id-uid" 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.22041468Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=427.552µs 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.220923231Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.22098155Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=58.63µs 2026-03-09T20:20:58.251 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.221440039Z level=info msg="Executing migration" id="delete alert_definition table" 2026-03-09T20:20:58.252 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.221518045Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=78.036µs 2026-03-09T20:20:58.252 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.221984187Z level=info msg="Executing migration" id="recreate alert_definition table" 2026-03-09T20:20:58.252 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.222349541Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=365.314µs 2026-03-09T20:20:58.252 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.222779626Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" 2026-03-09T20:20:58.252 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.223206305Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=426.589µs 2026-03-09T20:20:58.252 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.223681955Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" 2026-03-09T20:20:58.252 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.224124744Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=442.589µs 2026-03-09T20:20:58.252 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.224594472Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" 2026-03-09T20:20:58.252 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.22465202Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=58.058µs 2026-03-09T20:20:58.252 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.225129454Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" 2026-03-09T20:20:58.252 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.225565721Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=437.498µs 2026-03-09T20:20:58.252 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.22600339Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" 2026-03-09T20:20:58.252 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.22640963Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=406.26µs 2026-03-09T20:20:58.252 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.226913143Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" 2026-03-09T20:20:58.252 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.227348438Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=435.174µs 2026-03-09T20:20:58.252 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.227788121Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" 2026-03-09T20:20:58.252 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.228221321Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=433.111µs 2026-03-09T20:20:58.252 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.228642089Z level=info msg="Executing migration" id="Add column paused in alert_definition" 2026-03-09T20:20:58.252 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 
ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.230273383Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=1.630884ms 2026-03-09T20:20:58.252 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.230711152Z level=info msg="Executing migration" id="drop alert_definition table" 2026-03-09T20:20:58.252 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.231125417Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=415.026µs 2026-03-09T20:20:58.252 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.231589276Z level=info msg="Executing migration" id="delete alert_definition_version table" 2026-03-09T20:20:58.252 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.231659167Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=69.85µs 2026-03-09T20:20:58.252 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.23216269Z level=info msg="Executing migration" id="recreate alert_definition_version table" 2026-03-09T20:20:58.252 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.232558711Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=395.82µs 2026-03-09T20:20:58.252 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.232993273Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" 2026-03-09T20:20:58.252 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.233426916Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=433.561µs 2026-03-09T20:20:58.252 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.233881195Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" 2026-03-09T20:20:58.252 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.234318645Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=437.61µs 2026-03-09T20:20:58.252 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.234759319Z level=info msg="Executing 
migration" id="alter alert_definition_version table data column to mediumtext in mysql" 2026-03-09T20:20:58.252 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.234815885Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=57.187µs 2026-03-09T20:20:58.252 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.235289712Z level=info msg="Executing migration" id="drop alert_definition_version table" 2026-03-09T20:20:58.252 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.235712443Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=422.69µs 2026-03-09T20:20:58.252 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.236173777Z level=info msg="Executing migration" id="create alert_instance table" 2026-03-09T20:20:58.252 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.236590617Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=416.6µs 2026-03-09T20:20:58.252 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.237031964Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" 2026-03-09T20:20:58.252 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.237469582Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=437.409µs 2026-03-09T20:20:58.252 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.237952988Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" 2026-03-09T20:20:58.252 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.238381078Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=426.598µs 2026-03-09T20:20:58.252 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.239038689Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" 2026-03-09T20:20:58.252 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.240748249Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=1.70951ms 2026-03-09T20:20:58.252 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.241213259Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" 2026-03-09T20:20:58.252 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.241633667Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=420.238µs 2026-03-09T20:20:58.252 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.242107985Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" 2026-03-09T20:20:58.252 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.242526088Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=418.003µs 2026-03-09T20:20:58.252 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.242970128Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" 2026-03-09T20:20:58.493 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.257667518Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=14.694035ms 2026-03-09T20:20:58.493 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.343641374Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" 2026-03-09T20:20:58.493 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.351300331Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=7.659518ms 2026-03-09T20:20:58.493 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.352020238Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" 2026-03-09T20:20:58.493 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.352497281Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=477.484µs 2026-03-09T20:20:58.493 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.353028835Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" 2026-03-09T20:20:58.493 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator 
t=2026-03-09T20:20:58.353478688Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=450.524µs 2026-03-09T20:20:58.493 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.35399317Z level=info msg="Executing migration" id="add current_reason column related to current_state" 2026-03-09T20:20:58.493 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.355647527Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=1.654387ms 2026-03-09T20:20:58.493 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.356117387Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" 2026-03-09T20:20:58.493 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.357728632Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=1.611226ms 2026-03-09T20:20:58.493 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.358205596Z level=info msg="Executing migration" id="create alert_rule table" 2026-03-09T20:20:58.493 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.35881679Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=610.784µs 2026-03-09T20:20:58.493 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.359354876Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.359879859Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=525.093µs 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.360296449Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.360720964Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=424.555µs 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.361194059Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 
ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.361633322Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=439.032µs 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.362069568Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.362101488Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=32.301µs 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.362543806Z level=info msg="Executing migration" id="add column for to alert_rule" 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.364273313Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=1.729006ms 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.364706544Z level=info msg="Executing migration" id="add column annotations to alert_rule" 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.366309574Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=1.60295ms 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.366725132Z level=info msg="Executing migration" id="add column labels to alert_rule" 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.36833721Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=1.611697ms 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.36889331Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.369346919Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=455.053µs 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.3697909Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" 
2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.370164931Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=373.85µs 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.370606527Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.372203457Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=1.596388ms 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.37265974Z level=info msg="Executing migration" id="add panel_id column to alert_rule" 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.374244878Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=1.585158ms 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.374675654Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.37504256Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=366.676µs 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.375465552Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.377079653Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=1.613891ms 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.377496193Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.379186757Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=1.690383ms 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.379616542Z level=info msg="Executing 
migration" id="fix is_paused column for alert_rule table" 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.379641288Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=25.147µs 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.380120816Z level=info msg="Executing migration" id="create alert_rule_version table" 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.380583122Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=462.236µs 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.381014649Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.381413405Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=398.195µs 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.381850223Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.38227037Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=419.847µs 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.382691397Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.382716524Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=25.688µs 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.383145668Z level=info msg="Executing migration" id="add column for to alert_rule_version" 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.384893971Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=1.745387ms 
2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.385325699Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.387060195Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=1.734006ms 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.387457218Z level=info msg="Executing migration" id="add column labels to alert_rule_version" 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.38915183Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=1.694592ms 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.389631058Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.391305952Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=1.674694ms 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.391743852Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.39339893Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=1.655008ms 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.393838052Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.393862407Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=24.575µs 2026-03-09T20:20:58.494 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.394359428Z level=info msg="Executing migration" id=create_alert_configuration_table 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.394687943Z level=info msg="Migration successfully executed" 
id=create_alert_configuration_table duration=328.234µs 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.395086739Z level=info msg="Executing migration" id="Add column default in alert_configuration" 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.396859436Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=1.772686ms 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.397285785Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.39730982Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=24.296µs 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.397775101Z level=info msg="Executing migration" id="add column org_id in alert_configuration" 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.399495711Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=1.72058ms 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.399985809Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.400361963Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=377.366µs 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.400801146Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.402590324Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=1.788999ms 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.403057749Z level=info msg="Executing migration" id=create_ngalert_configuration_table 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 
ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.403359483Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=302.877µs 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.40379035Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.40414852Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=357.89µs 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.404583395Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.406368837Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=1.785101ms 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.406791207Z level=info msg="Executing migration" id="create provenance_type table" 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.407091559Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=300.903µs 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.407542974Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.407945257Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=400.529µs 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.408336831Z level=info msg="Executing migration" id="create alert_image table" 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.408660947Z level=info msg="Migration successfully executed" id="create alert_image table" duration=323.786µs 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.409041699Z level=info msg="Executing migration" id="add unique index on token to alert_image table" 2026-03-09T20:20:58.495 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.40939415Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=352.231µs 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.409909243Z level=info msg="Executing migration" id="support longer URLs in alert_image table" 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.409933669Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=24.526µs 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.410451719Z level=info msg="Executing migration" id=create_alert_configuration_history_table 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.410841588Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=387.796µs 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.411273377Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.411654149Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=380.833µs 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.412078374Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.412206744Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.41261047Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.412821715Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=211.166µs 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.413238646Z 
level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.413617464Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=377.827µs 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.414030829Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.41591237Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=1.881482ms 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.416344849Z level=info msg="Executing migration" id="create library_element table v1" 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.416785214Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=440.816µs 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.41722163Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.417637429Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=415.788µs 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.418083153Z level=info msg="Executing migration" id="create library_element_connection table v1" 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.418403774Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=320.761µs 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.418830502Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.419389219Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=558.286µs 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 
ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.419836896Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.420198494Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=361.508µs 2026-03-09T20:20:58.495 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.420598221Z level=info msg="Executing migration" id="increase max description length to 2048" 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.420609803Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=10.8µs 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.421068803Z level=info msg="Executing migration" id="alter library_element model to mediumtext" 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.421092717Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=24.095µs 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.42144154Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.421593023Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=152.565µs 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.422012699Z level=info msg="Executing migration" id="create data_keys table" 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.422386178Z level=info msg="Migration successfully executed" id="create data_keys table" duration=373.769µs 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.422820791Z level=info msg="Executing migration" id="create secrets table" 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.423143386Z level=info msg="Migration successfully executed" id="create secrets table" duration=322.595µs 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: 
logger=migrator t=2026-03-09T20:20:58.423555807Z level=info msg="Executing migration" id="rename data_keys name column to id" 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.432828494Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=9.270472ms 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.43345147Z level=info msg="Executing migration" id="add name column into data_keys" 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.435560948Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=2.109477ms 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.436040326Z level=info msg="Executing migration" id="copy data_keys id column values into name" 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.436104406Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=62.637µs 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.436590685Z level=info msg="Executing migration" id="rename data_keys name column to label" 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.446121035Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=9.524397ms 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.446726287Z level=info msg="Executing migration" id="rename data_keys id column back to name" 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.456107817Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=9.379246ms 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.45674448Z level=info msg="Executing migration" id="create kv_store table v1" 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.457128338Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=384.119µs 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.457607805Z level=info msg="Executing 
migration" id="add index kv_store.org_id-namespace-key" 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.458027581Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=419.496µs 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.458478295Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.458610192Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=131.947µs 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.459060124Z level=info msg="Executing migration" id="create permission table" 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.459431439Z level=info msg="Migration successfully executed" id="create permission table" duration=370.994µs 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.459910916Z level=info msg="Executing migration" id="add unique index permission.role_id" 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.460274156Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=364.903µs 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.46068746Z level=info msg="Executing migration" id="add unique index role_id_action_scope" 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.461073132Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=385.282µs 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.461466278Z level=info msg="Executing migration" id="create role table" 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.461827934Z level=info msg="Migration successfully executed" id="create role table" duration=361.397µs 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.462222262Z level=info msg="Executing migration" id="add column display_name" 2026-03-09T20:20:58.496 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.464399978Z level=info msg="Migration successfully executed" id="add column display_name" duration=2.178236ms 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.464882913Z level=info msg="Executing migration" id="add column group_name" 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.466902162Z level=info msg="Migration successfully executed" id="add column group_name" duration=2.019489ms 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.467358918Z level=info msg="Executing migration" id="add index role.org_id" 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.467778052Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=418.944µs 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.468197156Z level=info msg="Executing migration" id="add unique index role_org_id_name" 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.468626049Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=428.773µs 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.469179515Z level=info msg="Executing migration" id="add index role_org_id_uid" 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.46964649Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=467.445µs 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.470068299Z level=info msg="Executing migration" id="create team role table" 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.470402625Z level=info msg="Migration successfully executed" id="create team role table" duration=334.137µs 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.470845323Z level=info msg="Executing migration" id="add index team_role.org_id" 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.471252686Z level=info 
msg="Migration successfully executed" id="add index team_role.org_id" duration=405.299µs 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.47169302Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.472111444Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=418.154µs 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.472563118Z level=info msg="Executing migration" id="add index team_role.team_id" 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.472923203Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=359.883µs 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.473324755Z level=info msg="Executing migration" id="create user role table" 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.473668117Z level=info msg="Migration successfully executed" id="create user role table" duration=343.392µs 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.474124702Z level=info msg="Executing migration" id="add index user_role.org_id" 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.474525341Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=388.086µs 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.474939587Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.475312435Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=372.517µs 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.475745135Z level=info msg="Executing migration" id="add index user_role.user_id" 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.476118624Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=373.549µs 
2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.476608791Z level=info msg="Executing migration" id="create builtin role table" 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.476932688Z level=info msg="Migration successfully executed" id="create builtin role table" duration=323.757µs 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.477353195Z level=info msg="Executing migration" id="add index builtin_role.role_id" 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.477747694Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=394.379µs 2026-03-09T20:20:58.496 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.478154134Z level=info msg="Executing migration" id="add index builtin_role.name" 2026-03-09T20:20:58.497 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.478556207Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=401.963µs 2026-03-09T20:20:58.497 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.478970482Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" 2026-03-09T20:20:58.497 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.481298469Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=2.327587ms 2026-03-09T20:20:58.497 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.481738623Z level=info msg="Executing migration" id="add index builtin_role.org_id" 2026-03-09T20:20:58.497 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.482108124Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=369.402µs 2026-03-09T20:20:58.497 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.482550012Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" 2026-03-09T20:20:58.497 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.482926636Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=376.715µs 2026-03-09T20:20:58.497 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 
ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.48333924Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" 2026-03-09T20:20:58.497 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.483733537Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=393.436µs 2026-03-09T20:20:58.497 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.484214127Z level=info msg="Executing migration" id="add unique index role.uid" 2026-03-09T20:20:58.497 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.484627551Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=413.584µs 2026-03-09T20:20:58.497 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.485034302Z level=info msg="Executing migration" id="create seed assignment table" 2026-03-09T20:20:58.497 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.485334775Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=300.402µs 2026-03-09T20:20:58.497 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.485731617Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" 2026-03-09T20:20:58.497 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.486110817Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=378.709µs 2026-03-09T20:20:58.497 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.486543086Z level=info msg="Executing migration" id="add column hidden to role table" 2026-03-09T20:20:58.497 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.488823866Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=2.280549ms 2026-03-09T20:20:58.497 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.489274188Z level=info msg="Executing migration" id="permission kind migration" 2026-03-09T20:20:58.497 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.491491729Z level=info msg="Migration successfully executed" id="permission kind migration" duration=2.216029ms 2026-03-09T20:20:58.497 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.491970505Z level=info 
msg="Executing migration" id="permission attribute migration" 2026-03-09T20:20:58.746 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.494103488Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=2.132921ms 2026-03-09T20:20:58.747 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.497456153Z level=info msg="Executing migration" id="permission identifier migration" 2026-03-09T20:20:58.747 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.499721974Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=2.266051ms 2026-03-09T20:20:58.747 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.500347075Z level=info msg="Executing migration" id="add permission identifier index" 2026-03-09T20:20:58.747 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.500890612Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=542.165µs 2026-03-09T20:20:58.747 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.501427967Z level=info msg="Executing migration" id="add permission action scope role_id index" 2026-03-09T20:20:58.747 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.501991703Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=563.355µs 2026-03-09T20:20:58.747 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.502541282Z level=info msg="Executing migration" id="remove permission role_id action scope index" 2026-03-09T20:20:58.747 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.503034615Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=493.423µs 2026-03-09T20:20:58.747 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.503538699Z level=info msg="Executing migration" id="create query_history table v1" 2026-03-09T20:20:58.747 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.503987809Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=449µs 2026-03-09T20:20:58.747 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.504470503Z level=info msg="Executing migration" id="add index 
query_history.org_id-created_by-datasource_uid" 2026-03-09T20:20:58.747 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.504968024Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=497.351µs 2026-03-09T20:20:58.747 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.505441471Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" 2026-03-09T20:20:58.747 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.505560062Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=119.042µs 2026-03-09T20:20:58.747 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.506091306Z level=info msg="Executing migration" id="rbac disabled migrator" 2026-03-09T20:20:58.747 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.506176185Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=85.41µs 2026-03-09T20:20:58.747 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.506580171Z level=info msg="Executing migration" id="teams permissions migration" 2026-03-09T20:20:58.747 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.506848844Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=268.883µs 2026-03-09T20:20:58.747 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.507387342Z level=info msg="Executing migration" id="dashboard permissions" 2026-03-09T20:20:58.747 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.507761192Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=374.441µs 2026-03-09T20:20:58.747 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.508340176Z level=info msg="Executing migration" id="dashboard permissions uid scopes" 2026-03-09T20:20:58.747 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.508758098Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=418.673µs 2026-03-09T20:20:58.747 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.509369753Z level=info msg="Executing migration" id="drop managed folder create actions" 
2026-03-09T20:20:58.747 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.509565359Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=195.987µs 2026-03-09T20:20:58.747 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.510148341Z level=info msg="Executing migration" id="alerting notification permissions" 2026-03-09T20:20:58.747 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.510453882Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=305.512µs 2026-03-09T20:20:58.747 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.511009021Z level=info msg="Executing migration" id="create query_history_star table v1" 2026-03-09T20:20:58.747 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.511437523Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=428.211µs 2026-03-09T20:20:58.747 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.511994416Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" 2026-03-09T20:20:58.747 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.512669248Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=675.083µs 2026-03-09T20:20:58.747 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.513223967Z level=info msg="Executing migration" id="add column org_id in query_history_star" 2026-03-09T20:20:58.747 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.516471036Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=3.247038ms 2026-03-09T20:20:58.747 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.517049087Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" 2026-03-09T20:20:58.747 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.517146689Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=96.41µs 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.517719943Z level=info msg="Executing migration" id="create correlation table v1" 
2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.518354421Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=634.247µs 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.519008234Z level=info msg="Executing migration" id="add index correlations.uid" 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.519662078Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=652.793µs 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.520280807Z level=info msg="Executing migration" id="add index correlations.source_uid" 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.520927486Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=646.881µs 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.521490371Z level=info msg="Executing migration" id="add correlation config column" 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.524958913Z level=info msg="Migration successfully executed" id="add correlation config column" duration=3.465607ms 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.525780421Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.526368681Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=588.251µs 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.526935492Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.52745794Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=522.168µs 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.528017668Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 
20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.534704694Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=6.686235ms 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.535401839Z level=info msg="Executing migration" id="create correlation v2" 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.536080921Z level=info msg="Migration successfully executed" id="create correlation v2" duration=678.82µs 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.5368124Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.53738394Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=571.649µs 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.537953186Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.538565861Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=612.677µs 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.53912599Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.539674548Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=548.508µs 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.540219968Z level=info msg="Executing migration" id="copy correlation v1 to v2" 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.540410294Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=190.446µs 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.540985691Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: 
logger=migrator t=2026-03-09T20:20:58.541446884Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=461.113µs 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.541975895Z level=info msg="Executing migration" id="add provisioning column" 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.544438414Z level=info msg="Migration successfully executed" id="add provisioning column" duration=2.462619ms 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.544971592Z level=info msg="Executing migration" id="create entity_events table" 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.545364298Z level=info msg="Migration successfully executed" id="create entity_events table" duration=391.664µs 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.545905711Z level=info msg="Executing migration" id="create dashboard public config v1" 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.546393373Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=487.463µs 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.546913406Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.547151653Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.547730306Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.547971457Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.548540803Z level=info msg="Executing migration" id="Drop old dashboard public config table" 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 
ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.548940231Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=400.029µs 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.549478378Z level=info msg="Executing migration" id="recreate dashboard public config v1" 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.54994444Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=466.113µs 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.550433716Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.550950804Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=517.109µs 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.551456049Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.551987444Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=531.366µs 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.552505233Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.553002244Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=497.151µs 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.553489717Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.553999Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=509.564µs 2026-03-09T20:20:58.748 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.554489418Z level=info msg="Executing migration" id="Drop public config 
table" 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.554898103Z level=info msg="Migration successfully executed" id="Drop public config table" duration=408.615µs 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.555422875Z level=info msg="Executing migration" id="Recreate dashboard public config v2" 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.555925726Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=502.731µs 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.556405946Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.556914787Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=507.779µs 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.557403202Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.557935778Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=532.276µs 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.558442738Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.558946269Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=503.522µs 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.559442579Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.568466109Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=9.022559ms 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: 
logger=migrator t=2026-03-09T20:20:58.569060882Z level=info msg="Executing migration" id="add annotations_enabled column" 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.571558027Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=2.497084ms 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.572143082Z level=info msg="Executing migration" id="add time_selection_enabled column" 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.574635427Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=2.492275ms 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.57517711Z level=info msg="Executing migration" id="delete orphaned public dashboards" 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.575364612Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=187.772µs 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.575948955Z level=info msg="Executing migration" id="add share column" 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.578962135Z level=info msg="Migration successfully executed" id="add share column" duration=3.014142ms 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.579609878Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.579808059Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=198.28µs 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.580453397Z level=info msg="Executing migration" id="create file table" 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.581049081Z level=info msg="Migration successfully executed" id="create file table" duration=595.975µs 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.581652351Z level=info msg="Executing migration" id="file table idx: path 
natural pk" 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.582342583Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=690.142µs 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.582885899Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.583394883Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=508.903µs 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.586526024Z level=info msg="Executing migration" id="create file_meta table" 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.586905604Z level=info msg="Migration successfully executed" id="create file_meta table" duration=379.672µs 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.587402053Z level=info msg="Executing migration" id="file table idx: path key" 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.587911036Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=509.134µs 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.588551755Z level=info msg="Executing migration" id="set path collation in file table" 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.588647063Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=96.309µs 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.5891737Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.589266262Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=92.983µs 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.589721395Z level=info msg="Executing migration" id="managed permissions migration" 2026-03-09T20:20:58.749 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.59014679Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=424.675µs 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.590772472Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.590937591Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=165.469µs 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.591496067Z level=info msg="Executing migration" id="RBAC action name migrator" 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.592281366Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=785.52µs 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.592898091Z level=info msg="Executing migration" id="Add UID column to playlist" 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.595350041Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=2.453102ms 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.595876425Z level=info msg="Executing migration" id="Update uid column values in playlist" 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.596019903Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=143.758µs 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.59657311Z level=info msg="Executing migration" id="Add index for uid in playlist" 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.597136194Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=563.204µs 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.597645296Z level=info msg="Executing migration" id="update group index for alert rules" 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 
ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.597884954Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=239.949µs 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.598437208Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" 2026-03-09T20:20:58.749 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.598612176Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=175.168µs 2026-03-09T20:20:58.750 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.599145514Z level=info msg="Executing migration" id="admin only folder/dashboard permission" 2026-03-09T20:20:58.750 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.599402916Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=257.401µs 2026-03-09T20:20:58.750 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.59999742Z level=info msg="Executing migration" id="add action column to seed_assignment" 2026-03-09T20:20:58.750 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.602499683Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=2.502764ms 2026-03-09T20:20:58.750 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.603013274Z level=info msg="Executing migration" id="add scope column to seed_assignment" 2026-03-09T20:20:58.750 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.605468009Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=2.454465ms 2026-03-09T20:20:58.750 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.605988032Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" 2026-03-09T20:20:58.750 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.60648375Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=495.648µs 2026-03-09T20:20:58.750 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.607046523Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" 2026-03-09T20:20:58.750 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.635492328Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=28.444652ms 2026-03-09T20:20:58.750 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.636208519Z level=info msg="Executing migration" id="add unique index builtin_role_name back" 2026-03-09T20:20:58.750 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.636922185Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=711.883µs 2026-03-09T20:20:58.750 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.637535362Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" 2026-03-09T20:20:58.750 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.638165111Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=631.523µs 2026-03-09T20:20:58.750 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.638688241Z level=info msg="Executing migration" id="add primary key to seed_assigment" 2026-03-09T20:20:58.750 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.647497299Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=8.808456ms 2026-03-09T20:20:58.750 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.648174787Z level=info msg="Executing migration" id="add origin column to seed_assignment" 2026-03-09T20:20:58.750 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.650829937Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=2.65497ms 2026-03-09T20:20:58.750 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.65144572Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" 2026-03-09T20:20:58.750 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.651735863Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=290.233µs 2026-03-09T20:20:58.750 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.652381671Z level=info msg="Executing migration" id="prevent seeding OnCall access" 2026-03-09T20:20:58.750 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.652567549Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=185.858µs 2026-03-09T20:20:58.750 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.653174305Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" 2026-03-09T20:20:58.750 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.653361415Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=187.341µs 2026-03-09T20:20:58.750 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.653926773Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" 2026-03-09T20:20:58.750 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.654110087Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=183.403µs 2026-03-09T20:20:58.750 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.654705351Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" 2026-03-09T20:20:58.750 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.654893714Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=188.743µs 2026-03-09T20:20:58.750 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.655529324Z level=info msg="Executing migration" id="create folder table" 2026-03-09T20:20:58.750 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.656064265Z level=info msg="Migration successfully executed" id="create folder table" duration=534.8µs 2026-03-09T20:20:58.750 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.656624764Z level=info msg="Executing migration" id="Add index for parent_uid" 2026-03-09T20:20:58.750 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.6572854Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=660.606µs 2026-03-09T20:20:58.750 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.657830721Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" 
2026-03-09T20:20:58.750 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.658570566Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=738.543µs 2026-03-09T20:20:58.750 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.659113853Z level=info msg="Executing migration" id="Update folder title length" 2026-03-09T20:20:58.750 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.659202188Z level=info msg="Migration successfully executed" id="Update folder title length" duration=89.107µs 2026-03-09T20:20:58.750 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.659749302Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" 2026-03-09T20:20:58.750 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.660319229Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=569.797µs 2026-03-09T20:20:58.750 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.660840605Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" 2026-03-09T20:20:58.750 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.661388981Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=548.487µs 2026-03-09T20:20:58.750 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.661916689Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" 2026-03-09T20:20:58.750 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.662503788Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=586.838µs 2026-03-09T20:20:58.750 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.663026737Z level=info msg="Executing migration" id="Sync dashboard and folder table" 2026-03-09T20:20:58.750 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.663308674Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=281.957µs 2026-03-09T20:20:58.750 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.66386146Z level=info msg="Executing migration" 
id="Remove ghost folders from the folder table" 2026-03-09T20:20:58.750 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.664058148Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=196.879µs 2026-03-09T20:20:58.750 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.664582498Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" 2026-03-09T20:20:58.750 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.665097402Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=514.833µs 2026-03-09T20:20:58.751 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.665604352Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" 2026-03-09T20:20:58.751 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.66614308Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=538.557µs 2026-03-09T20:20:58.751 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.666608081Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" 2026-03-09T20:20:58.751 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.667123174Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=515.093µs 2026-03-09T20:20:58.751 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.667671211Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" 2026-03-09T20:20:58.751 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.668209828Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=538.848µs 2026-03-09T20:20:58.751 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.668720555Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" 2026-03-09T20:20:58.751 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.669240187Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=520.182µs 2026-03-09T20:20:58.751 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator 
t=2026-03-09T20:20:58.66975465Z level=info msg="Executing migration" id="create anon_device table" 2026-03-09T20:20:58.751 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.670177102Z level=info msg="Migration successfully executed" id="create anon_device table" duration=423.443µs 2026-03-09T20:20:58.751 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.670668581Z level=info msg="Executing migration" id="add unique index anon_device.device_id" 2026-03-09T20:20:58.751 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.671242605Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=573.804µs 2026-03-09T20:20:58.751 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.671749845Z level=info msg="Executing migration" id="add index anon_device.updated_at" 2026-03-09T20:20:58.751 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.672284766Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=535.914µs 2026-03-09T20:20:58.751 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.672840656Z level=info msg="Executing migration" id="create signing_key table" 2026-03-09T20:20:58.751 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.673300548Z level=info msg="Migration successfully executed" id="create signing_key table" duration=459.911µs 2026-03-09T20:20:58.751 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.673853693Z level=info msg="Executing migration" id="add unique index signing_key.key_id" 2026-03-09T20:20:58.751 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.674379748Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=525.393µs 2026-03-09T20:20:58.751 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.67487256Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" 2026-03-09T20:20:58.751 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.675399136Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=526.625µs 2026-03-09T20:20:58.751 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.675919149Z level=info msg="Executing migration" id="migrate record of 
created folders during legacy migration to kvstore" 2026-03-09T20:20:58.751 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.676127419Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=208.73µs 2026-03-09T20:20:58.751 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.676656189Z level=info msg="Executing migration" id="Add folder_uid for dashboard" 2026-03-09T20:20:58.751 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.679391568Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=2.735008ms 2026-03-09T20:20:58.751 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.67994267Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" 2026-03-09T20:20:58.751 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.680406208Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=464.138µs 2026-03-09T20:20:58.751 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.680972568Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" 2026-03-09T20:20:58.751 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.681584703Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=611.906µs 2026-03-09T20:20:58.751 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.682088947Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" 2026-03-09T20:20:58.751 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.682674543Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=575.788µs 2026-03-09T20:20:58.751 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.683168007Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" 2026-03-09T20:20:58.751 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.683730891Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=562.884µs 2026-03-09T20:20:58.751 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: 
logger=migrator t=2026-03-09T20:20:58.684256745Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" 2026-03-09T20:20:58.751 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.68480969Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=552.723µs 2026-03-09T20:20:58.751 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.685352476Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" 2026-03-09T20:20:58.751 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.685902785Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=549.238µs 2026-03-09T20:20:58.751 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.686365522Z level=info msg="Executing migration" id="create sso_setting table" 2026-03-09T20:20:58.751 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.68686124Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=495.98µs 2026-03-09T20:20:58.751 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.687408684Z level=info msg="Executing migration" id="copy kvstore migration status to each org" 2026-03-09T20:20:58.751 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.687883604Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=475.791µs 2026-03-09T20:20:58.751 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.688379441Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" 2026-03-09T20:20:58.751 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.688584696Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=205.634µs 2026-03-09T20:20:58.752 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.689139925Z level=info msg="Executing migration" id="alter kv_store.value to longtext" 2026-03-09T20:20:58.752 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.689238269Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=99.416µs 2026-03-09T20:20:58.752 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 
ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.689699604Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" 2026-03-09T20:20:58.752 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.692805497Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=3.102257ms 2026-03-09T20:20:58.752 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.69358714Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" 2026-03-09T20:20:58.752 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.696412177Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=2.824767ms 2026-03-09T20:20:58.752 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.696949333Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" 2026-03-09T20:20:58.752 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.697179564Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=230.472µs 2026-03-09T20:20:58.752 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=migrator t=2026-03-09T20:20:58.697714956Z level=info msg="migrations completed" performed=547 skipped=0 duration=797.796198ms 2026-03-09T20:20:58.752 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=sqlstore t=2026-03-09T20:20:58.698380301Z level=info msg="Created default organization" 2026-03-09T20:20:58.752 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=secrets t=2026-03-09T20:20:58.699102012Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 2026-03-09T20:20:58.752 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=plugin.store t=2026-03-09T20:20:58.707137202Z level=info msg="Loading plugins..." 
2026-03-09T20:20:58.752 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=local.finder t=2026-03-09T20:20:58.746872879Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled 2026-03-09T20:20:58.823 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:20:58 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:20:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:20:58.824 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:58 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:58.824 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:58 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:58.824 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:58 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:58.824 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:58 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:58.824 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:58 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:58.824 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:58 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:58.824 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:58 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:58.824 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:58 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:59.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:58 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:59.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:58 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:59.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:58 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:59.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:58 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:20:59.022 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=plugin.store t=2026-03-09T20:20:58.747105956Z level=info msg="Plugins loaded" count=55 duration=39.968683ms 2026-03-09T20:20:59.022 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=query_data t=2026-03-09T20:20:58.760180479Z level=info msg="Query Service initialization" 2026-03-09T20:20:59.022 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=live.push_http t=2026-03-09T20:20:58.76166138Z level=info msg="Live Push Gateway initialization" 2026-03-09T20:20:59.022 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=ngalert.migration t=2026-03-09T20:20:58.762684466Z level=info msg=Starting 2026-03-09T20:20:59.022 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=ngalert.migration t=2026-03-09T20:20:58.762868029Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false 2026-03-09T20:20:59.022 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=ngalert.migration orgID=1 t=2026-03-09T20:20:58.763036595Z level=info msg="Migrating alerts for organisation" 2026-03-09T20:20:59.022 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=ngalert.migration orgID=1 t=2026-03-09T20:20:58.763307732Z level=info msg="Alerts found to migrate" alerts=0 2026-03-09T20:20:59.022 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=ngalert.migration t=2026-03-09T20:20:58.763991743Z level=info msg="Completed alerting migration" 2026-03-09T20:20:59.022 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=ngalert.state.manager t=2026-03-09T20:20:58.770374359Z level=info msg="Running in alternative execution of Error/NoData mode" 2026-03-09T20:20:59.022 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=infra.usagestats.collector t=2026-03-09T20:20:58.77126143Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 2026-03-09T20:20:59.022 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=provisioning.datasources t=2026-03-09T20:20:58.772318108Z level=info msg="inserting datasource from configuration" name=Dashboard1 uid=P43CA22E17D0F9596 2026-03-09T20:20:59.023 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=provisioning.datasources t=2026-03-09T20:20:58.776711672Z level=info msg="inserting datasource from configuration" name=Loki uid=P8E80F9AEF21F6940 2026-03-09T20:20:59.023 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=provisioning.alerting t=2026-03-09T20:20:58.781133448Z level=info msg="starting to provision alerting" 2026-03-09T20:20:59.023 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=provisioning.alerting t=2026-03-09T20:20:58.781141633Z level=info msg="finished to provision alerting" 2026-03-09T20:20:59.023 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=grafanaStorageLogger t=2026-03-09T20:20:58.781338562Z level=info msg="Storage starting" 2026-03-09T20:20:59.023 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=http.server t=2026-03-09T20:20:58.782687747Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA 2026-03-09T20:20:59.023 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=http.server 
t=2026-03-09T20:20:58.78305264Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=https subUrl= socket= 2026-03-09T20:20:59.023 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=ngalert.state.manager t=2026-03-09T20:20:58.783154862Z level=info msg="Warming state cache for startup" 2026-03-09T20:20:59.023 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=ngalert.state.manager t=2026-03-09T20:20:58.78337806Z level=info msg="State cache has been initialized" states=0 duration=222.987µs 2026-03-09T20:20:59.023 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=provisioning.dashboard t=2026-03-09T20:20:58.784208654Z level=info msg="starting to provision dashboards" 2026-03-09T20:20:59.023 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=sqlstore.transactions t=2026-03-09T20:20:58.794155552Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" 2026-03-09T20:20:59.023 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=ngalert.multiorg.alertmanager t=2026-03-09T20:20:58.795040529Z level=info msg="Starting MultiOrg Alertmanager" 2026-03-09T20:20:59.023 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=ngalert.scheduler t=2026-03-09T20:20:58.795125888Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1 2026-03-09T20:20:59.023 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=ticker t=2026-03-09T20:20:58.795202283Z level=info msg=starting first_tick=2026-03-09T20:21:00Z 2026-03-09T20:20:59.023 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=sqlstore.transactions t=2026-03-09T20:20:58.804473777Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked" 2026-03-09T20:20:59.023 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=sqlstore.transactions t=2026-03-09T20:20:58.81497866Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=2 code="database is locked" 2026-03-09T20:20:59.023 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=sqlstore.transactions t=2026-03-09T20:20:58.826482001Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=3 code="database is locked" 2026-03-09T20:20:59.023 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=sqlstore.transactions t=2026-03-09T20:20:58.837292516Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=4 code="database is locked" 2026-03-09T20:20:59.023 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=plugins.update.checker 
t=2026-03-09T20:20:58.893593279Z level=info msg="Update check succeeded" duration=99.150619ms 2026-03-09T20:20:59.023 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=provisioning.dashboard t=2026-03-09T20:20:58.89999896Z level=info msg="finished to provision dashboards" 2026-03-09T20:20:59.522 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:59 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=grafana-apiserver t=2026-03-09T20:20:59.115111927Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" 2026-03-09T20:20:59.522 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:20:59 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=grafana-apiserver t=2026-03-09T20:20:59.115596343Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" 2026-03-09T20:21:00.125 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:59 vm05 ceph-mon[51870]: Deploying daemon node-exporter.a on vm05 2026-03-09T20:21:00.125 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:59 vm05 ceph-mon[51870]: pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T20:21:00.125 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:59 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:21:00.125 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:20:59 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:00.125 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:59 vm05 ceph-mon[61345]: Deploying daemon node-exporter.a on vm05 2026-03-09T20:21:00.125 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:59 vm05 ceph-mon[61345]: pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T20:21:00.125 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:59 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:21:00.125 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:20:59 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:00.125 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:20:59 vm05 bash[92886]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a 2026-03-09T20:21:00.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:59 vm09 ceph-mon[54524]: Deploying daemon node-exporter.a on vm05 2026-03-09T20:21:00.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:59 vm09 ceph-mon[54524]: pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T20:21:00.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:59 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:21:00.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:20:59 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:00.409 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-alertmanager-a[92655]: ts=2026-03-09T20:21:00.126Z caller=cluster.go:698 level=info 
component=cluster msg="gossip settled; proceeding" elapsed=10.003914634s 2026-03-09T20:21:00.912 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 bash[92886]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e 2026-03-09T20:21:00.912 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 bash[92886]: Writing manifest to image destination 2026-03-09T20:21:00.912 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 podman[92886]: 2026-03-09 20:21:00.638807047 +0000 UTC m=+2.113806661 container create e166fd129dd7132b9170740eb2da3e544c3c884893a368de6b95bcd42f2c7263 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a, maintainer=The Prometheus Authors ) 2026-03-09T20:21:00.912 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 podman[92886]: 2026-03-09 20:21:00.631722426 +0000 UTC m=+2.106722040 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0 2026-03-09T20:21:00.912 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 podman[92886]: 2026-03-09 20:21:00.705193382 +0000 UTC m=+2.180192996 container init e166fd129dd7132b9170740eb2da3e544c3c884893a368de6b95bcd42f2c7263 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a, maintainer=The Prometheus Authors ) 2026-03-09T20:21:00.912 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 podman[92886]: 2026-03-09 20:21:00.711514715 +0000 UTC m=+2.186514329 container start e166fd129dd7132b9170740eb2da3e544c3c884893a368de6b95bcd42f2c7263 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a, maintainer=The Prometheus Authors ) 2026-03-09T20:21:00.912 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 bash[92886]: e166fd129dd7132b9170740eb2da3e544c3c884893a368de6b95bcd42f2c7263 2026-03-09T20:21:00.912 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.717Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)" 2026-03-09T20:21:00.912 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.717Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)" 2026-03-09T20:21:00.912 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.717Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$ 2026-03-09T20:21:00.912 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data 2026-03-09T20:21:00.912 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 
vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/) 2026-03-09T20:21:00.912 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$ 2026-03-09T20:21:00.912 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z caller=node_exporter.go:110 level=info msg="Enabled collectors" 2026-03-09T20:21:00.912 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z caller=node_exporter.go:117 level=info collector=arp 2026-03-09T20:21:00.912 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z caller=node_exporter.go:117 level=info collector=bcache 2026-03-09T20:21:00.912 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z caller=node_exporter.go:117 level=info collector=bonding 2026-03-09T20:21:00.912 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z caller=node_exporter.go:117 level=info collector=btrfs 2026-03-09T20:21:00.913 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z caller=node_exporter.go:117 level=info collector=conntrack 2026-03-09T20:21:00.913 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z caller=node_exporter.go:117 level=info collector=cpu 2026-03-09T20:21:00.913 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z caller=node_exporter.go:117 level=info collector=cpufreq 2026-03-09T20:21:00.913 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z caller=node_exporter.go:117 level=info collector=diskstats 2026-03-09T20:21:00.913 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z caller=node_exporter.go:117 level=info collector=dmi 2026-03-09T20:21:00.913 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z caller=node_exporter.go:117 level=info collector=edac 2026-03-09T20:21:00.913 
INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z caller=node_exporter.go:117 level=info collector=entropy 2026-03-09T20:21:00.913 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z caller=node_exporter.go:117 level=info collector=fibrechannel 2026-03-09T20:21:00.913 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z caller=node_exporter.go:117 level=info collector=filefd 2026-03-09T20:21:00.913 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z caller=node_exporter.go:117 level=info collector=filesystem 2026-03-09T20:21:00.913 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z caller=node_exporter.go:117 level=info collector=hwmon 2026-03-09T20:21:00.913 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z caller=node_exporter.go:117 level=info collector=infiniband 2026-03-09T20:21:00.913 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z caller=node_exporter.go:117 level=info collector=ipvs 2026-03-09T20:21:00.913 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z caller=node_exporter.go:117 level=info collector=loadavg 2026-03-09T20:21:00.913 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z caller=node_exporter.go:117 level=info collector=mdadm 2026-03-09T20:21:00.913 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z caller=node_exporter.go:117 level=info collector=meminfo 2026-03-09T20:21:00.913 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z caller=node_exporter.go:117 level=info collector=netclass 2026-03-09T20:21:00.913 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z caller=node_exporter.go:117 level=info collector=netdev 2026-03-09T20:21:00.913 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z caller=node_exporter.go:117 level=info collector=netstat 2026-03-09T20:21:00.913 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z caller=node_exporter.go:117 level=info collector=nfs 2026-03-09T20:21:00.913 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 
ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z caller=node_exporter.go:117 level=info collector=nfsd 2026-03-09T20:21:00.913 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z caller=node_exporter.go:117 level=info collector=nvme 2026-03-09T20:21:00.913 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z caller=node_exporter.go:117 level=info collector=os 2026-03-09T20:21:00.913 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z caller=node_exporter.go:117 level=info collector=powersupplyclass 2026-03-09T20:21:00.913 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z caller=node_exporter.go:117 level=info collector=pressure 2026-03-09T20:21:00.913 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z caller=node_exporter.go:117 level=info collector=rapl 2026-03-09T20:21:00.913 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z caller=node_exporter.go:117 level=info collector=schedstat 2026-03-09T20:21:00.913 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z caller=node_exporter.go:117 level=info collector=selinux 2026-03-09T20:21:00.913 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z caller=node_exporter.go:117 level=info collector=sockstat 2026-03-09T20:21:00.913 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z caller=node_exporter.go:117 level=info collector=softnet 2026-03-09T20:21:00.913 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z caller=node_exporter.go:117 level=info collector=stat 2026-03-09T20:21:00.913 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z caller=node_exporter.go:117 level=info collector=tapestats 2026-03-09T20:21:00.913 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z caller=node_exporter.go:117 level=info collector=textfile 2026-03-09T20:21:00.913 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z caller=node_exporter.go:117 level=info collector=thermal_zone 2026-03-09T20:21:00.913 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z 
caller=node_exporter.go:117 level=info collector=time 2026-03-09T20:21:00.913 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z caller=node_exporter.go:117 level=info collector=udp_queues 2026-03-09T20:21:00.913 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z caller=node_exporter.go:117 level=info collector=uname 2026-03-09T20:21:00.913 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z caller=node_exporter.go:117 level=info collector=vmstat 2026-03-09T20:21:00.913 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z caller=node_exporter.go:117 level=info collector=xfs 2026-03-09T20:21:00.913 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.718Z caller=node_exporter.go:117 level=info collector=zfs 2026-03-09T20:21:00.913 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.719Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100 2026-03-09T20:21:00.913 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a[92942]: ts=2026-03-09T20:21:00.719Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100 2026-03-09T20:21:00.913 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:21:00 vm05 systemd[1]: Started Ceph node-exporter.a for c0151936-1bf4-11f1-b896-23f7bea8a6ea. 
2026-03-09T20:21:02.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:01 vm09 ceph-mon[54524]: pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T20:21:02.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:01 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:02.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:01 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:02.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:01 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:02.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:01 vm09 ceph-mon[54524]: Deploying daemon node-exporter.b on vm09 2026-03-09T20:21:02.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:01 vm05 ceph-mon[51870]: pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T20:21:02.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:01 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:02.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:01 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:02.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:01 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:02.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:01 vm05 ceph-mon[51870]: Deploying daemon node-exporter.b on vm09 2026-03-09T20:21:02.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:01 vm05 ceph-mon[61345]: pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T20:21:02.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:01 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:02.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:01 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:02.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:01 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:02.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:01 vm05 ceph-mon[61345]: Deploying daemon node-exporter.b on vm09 2026-03-09T20:21:03.272 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:02 vm09 bash[81885]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a 2026-03-09T20:21:03.828 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 bash[81885]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e 2026-03-09T20:21:03.828 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 bash[81885]: Writing manifest to image destination 2026-03-09T20:21:03.828 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 podman[81885]: 2026-03-09 20:21:03.74953385 +0000 UTC m=+2.036832477 container create 52f5c2f42d47c9819e96a9ba283c101756e5943f967b4fca4d6bc53e61f281fa (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b, maintainer=The Prometheus Authors ) 2026-03-09T20:21:03.828 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 podman[81885]: 2026-03-09 20:21:03.802478542 +0000 UTC m=+2.089777179 container init 52f5c2f42d47c9819e96a9ba283c101756e5943f967b4fca4d6bc53e61f281fa (image=quay.io/prometheus/node-exporter:v1.7.0, 
name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b, maintainer=The Prometheus Authors ) 2026-03-09T20:21:03.828 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 podman[81885]: 2026-03-09 20:21:03.80907546 +0000 UTC m=+2.096374087 container start 52f5c2f42d47c9819e96a9ba283c101756e5943f967b4fca4d6bc53e61f281fa (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b, maintainer=The Prometheus Authors ) 2026-03-09T20:21:03.828 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 bash[81885]: 52f5c2f42d47c9819e96a9ba283c101756e5943f967b4fca4d6bc53e61f281fa 2026-03-09T20:21:03.828 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 podman[81885]: 2026-03-09 20:21:03.743014948 +0000 UTC m=+2.030313585 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0 2026-03-09T20:21:03.828 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 systemd[1]: Started Ceph node-exporter.b for c0151936-1bf4-11f1-b896-23f7bea8a6ea. 2026-03-09T20:21:03.828 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.816Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)" 2026-03-09T20:21:03.828 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.817Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)" 2026-03-09T20:21:03.828 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.817Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$ 2026-03-09T20:21:03.828 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.817Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data 2026-03-09T20:21:03.828 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.817Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/) 2026-03-09T20:21:03.828 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.817Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$ 2026-03-09T20:21:03.828 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 
ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.818Z caller=node_exporter.go:110 level=info msg="Enabled collectors" 2026-03-09T20:21:03.828 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.818Z caller=node_exporter.go:117 level=info collector=arp 2026-03-09T20:21:03.828 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.818Z caller=node_exporter.go:117 level=info collector=bcache 2026-03-09T20:21:03.829 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.818Z caller=node_exporter.go:117 level=info collector=bonding 2026-03-09T20:21:03.829 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.818Z caller=node_exporter.go:117 level=info collector=btrfs 2026-03-09T20:21:03.829 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.818Z caller=node_exporter.go:117 level=info collector=conntrack 2026-03-09T20:21:03.829 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.818Z caller=node_exporter.go:117 level=info collector=cpu 2026-03-09T20:21:03.829 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.818Z caller=node_exporter.go:117 level=info collector=cpufreq 2026-03-09T20:21:03.829 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.818Z caller=node_exporter.go:117 level=info collector=diskstats 2026-03-09T20:21:03.829 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.818Z caller=node_exporter.go:117 level=info collector=dmi 2026-03-09T20:21:03.829 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.818Z caller=node_exporter.go:117 level=info collector=edac 2026-03-09T20:21:03.829 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.818Z caller=node_exporter.go:117 level=info collector=entropy 2026-03-09T20:21:03.829 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.818Z caller=node_exporter.go:117 level=info collector=fibrechannel 2026-03-09T20:21:03.829 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.818Z caller=node_exporter.go:117 level=info collector=filefd 2026-03-09T20:21:03.829 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.818Z caller=node_exporter.go:117 
level=info collector=filesystem 2026-03-09T20:21:03.829 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.818Z caller=node_exporter.go:117 level=info collector=hwmon 2026-03-09T20:21:03.829 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.818Z caller=node_exporter.go:117 level=info collector=infiniband 2026-03-09T20:21:03.829 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.818Z caller=node_exporter.go:117 level=info collector=ipvs 2026-03-09T20:21:03.829 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.818Z caller=node_exporter.go:117 level=info collector=loadavg 2026-03-09T20:21:03.829 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.818Z caller=node_exporter.go:117 level=info collector=mdadm 2026-03-09T20:21:03.829 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.818Z caller=node_exporter.go:117 level=info collector=meminfo 2026-03-09T20:21:03.829 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.818Z caller=node_exporter.go:117 level=info collector=netclass 2026-03-09T20:21:03.829 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.818Z caller=node_exporter.go:117 level=info collector=netdev 2026-03-09T20:21:03.829 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.818Z caller=node_exporter.go:117 level=info collector=netstat 2026-03-09T20:21:03.829 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.818Z caller=node_exporter.go:117 level=info collector=nfs 2026-03-09T20:21:03.829 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.818Z caller=node_exporter.go:117 level=info collector=nfsd 2026-03-09T20:21:03.829 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.818Z caller=node_exporter.go:117 level=info collector=nvme 2026-03-09T20:21:03.829 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.818Z caller=node_exporter.go:117 level=info collector=os 2026-03-09T20:21:03.829 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.818Z caller=node_exporter.go:117 level=info collector=powersupplyclass 2026-03-09T20:21:03.829 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 
ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.818Z caller=node_exporter.go:117 level=info collector=pressure 2026-03-09T20:21:03.829 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.818Z caller=node_exporter.go:117 level=info collector=rapl 2026-03-09T20:21:03.829 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-mon[54524]: pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T20:21:04.023 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.818Z caller=node_exporter.go:117 level=info collector=schedstat 2026-03-09T20:21:04.023 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.818Z caller=node_exporter.go:117 level=info collector=selinux 2026-03-09T20:21:04.023 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.818Z caller=node_exporter.go:117 level=info collector=sockstat 2026-03-09T20:21:04.023 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.818Z caller=node_exporter.go:117 level=info collector=softnet 2026-03-09T20:21:04.023 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.818Z caller=node_exporter.go:117 level=info collector=stat 2026-03-09T20:21:04.023 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.818Z caller=node_exporter.go:117 level=info collector=tapestats 2026-03-09T20:21:04.023 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.818Z caller=node_exporter.go:117 level=info collector=textfile 2026-03-09T20:21:04.023 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.818Z caller=node_exporter.go:117 level=info collector=thermal_zone 2026-03-09T20:21:04.023 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.818Z caller=node_exporter.go:117 level=info collector=time 2026-03-09T20:21:04.024 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.818Z caller=node_exporter.go:117 level=info collector=udp_queues 2026-03-09T20:21:04.024 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.818Z caller=node_exporter.go:117 level=info collector=uname 2026-03-09T20:21:04.024 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.818Z caller=node_exporter.go:117 level=info 
collector=vmstat 2026-03-09T20:21:04.024 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.818Z caller=node_exporter.go:117 level=info collector=xfs 2026-03-09T20:21:04.024 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.818Z caller=node_exporter.go:117 level=info collector=zfs 2026-03-09T20:21:04.024 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.818Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100 2026-03-09T20:21:04.024 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:21:03 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b[81939]: ts=2026-03-09T20:21:03.818Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100 2026-03-09T20:21:04.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:03 vm05 ceph-mon[51870]: pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T20:21:04.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:03 vm05 ceph-mon[61345]: pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T20:21:05.048 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:04 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:05.048 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:04 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:05.048 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:04 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:05.048 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:04 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:05.048 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:04 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:05.048 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:04 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:21:05.048 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:04 vm09 ceph-mon[54524]: pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:21:05.048 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:04 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:05.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:04 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:05.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:04 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:05.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:04 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:05.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:04 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:05.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:04 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:05.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:04 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' 
entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:21:05.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:04 vm05 ceph-mon[51870]: pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:21:05.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:04 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:05.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:04 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:05.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:04 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:05.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:04 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:05.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:04 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:05.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:04 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:05.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:04 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:21:05.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:04 vm05 ceph-mon[61345]: pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:21:05.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:04 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:05.523 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:21:05 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:21:06.063 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:21:05 vm05 systemd[1]: Stopping Ceph alertmanager.a for c0151936-1bf4-11f1-b896-23f7bea8a6ea... 
2026-03-09T20:21:06.389 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:06 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:06.389 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:06 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:06.389 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:06 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:21:06.389 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:06 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:06.389 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:06 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:06.389 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:06 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:21:06.389 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:06 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:21:06.389 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:06 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:06.389 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:06 vm05 ceph-mon[51870]: Reconfiguring alertmanager.a (dependencies changed)... 2026-03-09T20:21:06.389 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:06 vm05 ceph-mon[51870]: Reconfiguring daemon alertmanager.a on vm05 2026-03-09T20:21:06.389 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:06 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:06.389 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:06 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:06.389 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:06 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:21:06.389 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:06 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:06.389 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:06 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:06.389 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:06 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:21:06.389 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:06 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:21:06.389 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:06 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:06.389 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:06 vm05 ceph-mon[61345]: Reconfiguring alertmanager.a (dependencies changed)... 
2026-03-09T20:21:06.389 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:06 vm05 ceph-mon[61345]: Reconfiguring daemon alertmanager.a on vm05 2026-03-09T20:21:06.389 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:21:06 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-alertmanager-a[92655]: ts=2026-03-09T20:21:06.071Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..." 2026-03-09T20:21:06.389 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:21:06 vm05 podman[93548]: 2026-03-09 20:21:06.083673903 +0000 UTC m=+0.036567831 container died 5819767588bfdfbe4162967cca9d066dd73bc28af267fbf25a754776716c0fa2 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-alertmanager-a, maintainer=The Prometheus Authors ) 2026-03-09T20:21:06.389 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:21:06 vm05 podman[93548]: 2026-03-09 20:21:06.205131772 +0000 UTC m=+0.158025700 container remove 5819767588bfdfbe4162967cca9d066dd73bc28af267fbf25a754776716c0fa2 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-alertmanager-a, maintainer=The Prometheus Authors ) 2026-03-09T20:21:06.389 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:21:06 vm05 podman[93548]: 2026-03-09 20:21:06.206857683 +0000 UTC m=+0.159751611 volume remove 7fa2ba9164284795ec3abc37de33aaa6029c88f66d3653b6290f512d257787b7 2026-03-09T20:21:06.389 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:21:06 vm05 bash[93548]: ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-alertmanager-a 2026-03-09T20:21:06.390 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:21:06 vm05 systemd[1]: ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@alertmanager.a.service: Deactivated successfully. 2026-03-09T20:21:06.390 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:21:06 vm05 systemd[1]: Stopped Ceph alertmanager.a for c0151936-1bf4-11f1-b896-23f7bea8a6ea. 2026-03-09T20:21:06.390 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:21:06 vm05 systemd[1]: Starting Ceph alertmanager.a for c0151936-1bf4-11f1-b896-23f7bea8a6ea... 
2026-03-09T20:21:06.390 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:21:06 vm05 podman[93624]: 2026-03-09 20:21:06.389519847 +0000 UTC m=+0.023965690 volume create 23a0b629cfbd1075cc2c1a143810684c31d0d6b4448f91d71f719ad3ac5cb866 2026-03-09T20:21:06.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:06 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:06 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:06 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:21:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:06 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:06 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:06 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:21:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:06 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:21:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:06 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:06 vm09 ceph-mon[54524]: Reconfiguring alertmanager.a (dependencies changed)... 2026-03-09T20:21:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:06 vm09 ceph-mon[54524]: Reconfiguring daemon alertmanager.a on vm05 2026-03-09T20:21:06.660 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:21:06 vm05 podman[93624]: 2026-03-09 20:21:06.396513088 +0000 UTC m=+0.030958920 container create b433c0522983d3e565dd97caa875523fd403604be74f9407889fe705a6d8329e (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-alertmanager-a, maintainer=The Prometheus Authors ) 2026-03-09T20:21:06.660 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:21:06 vm05 podman[93624]: 2026-03-09 20:21:06.454329771 +0000 UTC m=+0.088775614 container init b433c0522983d3e565dd97caa875523fd403604be74f9407889fe705a6d8329e (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-alertmanager-a, maintainer=The Prometheus Authors ) 2026-03-09T20:21:06.660 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:21:06 vm05 podman[93624]: 2026-03-09 20:21:06.45920711 +0000 UTC m=+0.093652943 container start b433c0522983d3e565dd97caa875523fd403604be74f9407889fe705a6d8329e (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-alertmanager-a, maintainer=The Prometheus Authors ) 2026-03-09T20:21:06.660 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:21:06 vm05 bash[93624]: b433c0522983d3e565dd97caa875523fd403604be74f9407889fe705a6d8329e 2026-03-09T20:21:06.660 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:21:06 vm05 podman[93624]: 2026-03-09 20:21:06.382427872 +0000 UTC m=+0.016873725 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0 
2026-03-09T20:21:06.660 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:21:06 vm05 systemd[1]: Started Ceph alertmanager.a for c0151936-1bf4-11f1-b896-23f7bea8a6ea. 2026-03-09T20:21:06.660 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:21:06 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-alertmanager-a[93634]: ts=2026-03-09T20:21:06.488Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)" 2026-03-09T20:21:06.660 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:21:06 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-alertmanager-a[93634]: ts=2026-03-09T20:21:06.488Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)" 2026-03-09T20:21:06.660 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:21:06 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-alertmanager-a[93634]: ts=2026-03-09T20:21:06.489Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.123.105 port=9094 2026-03-09T20:21:06.660 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:21:06 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-alertmanager-a[93634]: ts=2026-03-09T20:21:06.494Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s 2026-03-09T20:21:06.660 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:21:06 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-alertmanager-a[93634]: ts=2026-03-09T20:21:06.526Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-09T20:21:06.660 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:21:06 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-alertmanager-a[93634]: ts=2026-03-09T20:21:06.526Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-09T20:21:06.660 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:21:06 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-alertmanager-a[93634]: ts=2026-03-09T20:21:06.529Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9093 2026-03-09T20:21:06.660 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:21:06 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-alertmanager-a[93634]: ts=2026-03-09T20:21:06.529Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=[::]:9093 2026-03-09T20:21:07.393 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:21:07 vm09 systemd[1]: Stopping Ceph prometheus.a for c0151936-1bf4-11f1-b896-23f7bea8a6ea... 2026-03-09T20:21:07.687 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:21:07 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[80032]: ts=2026-03-09T20:21:07.393Z caller=main.go:964 level=warn msg="Received SIGTERM, exiting gracefully..." 2026-03-09T20:21:07.687 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:21:07 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[80032]: ts=2026-03-09T20:21:07.393Z caller=main.go:988 level=info msg="Stopping scrape discovery manager..." 2026-03-09T20:21:07.687 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:21:07 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[80032]: ts=2026-03-09T20:21:07.393Z caller=main.go:1002 level=info msg="Stopping notify discovery manager..." 
2026-03-09T20:21:07.687 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:21:07 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[80032]: ts=2026-03-09T20:21:07.393Z caller=manager.go:177 level=info component="rule manager" msg="Stopping rule manager..." 2026-03-09T20:21:07.687 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:21:07 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[80032]: ts=2026-03-09T20:21:07.393Z caller=manager.go:187 level=info component="rule manager" msg="Rule manager stopped" 2026-03-09T20:21:07.687 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:21:07 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[80032]: ts=2026-03-09T20:21:07.393Z caller=main.go:1039 level=info msg="Stopping scrape manager..." 2026-03-09T20:21:07.687 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:21:07 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[80032]: ts=2026-03-09T20:21:07.393Z caller=main.go:984 level=info msg="Scrape discovery manager stopped" 2026-03-09T20:21:07.687 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:21:07 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[80032]: ts=2026-03-09T20:21:07.393Z caller=main.go:998 level=info msg="Notify discovery manager stopped" 2026-03-09T20:21:07.687 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:21:07 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[80032]: ts=2026-03-09T20:21:07.394Z caller=main.go:1031 level=info msg="Scrape manager stopped" 2026-03-09T20:21:07.687 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:21:07 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[80032]: ts=2026-03-09T20:21:07.395Z caller=notifier.go:618 level=info component=notifier msg="Stopping notification manager..." 2026-03-09T20:21:07.687 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:21:07 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[80032]: ts=2026-03-09T20:21:07.395Z caller=main.go:1261 level=info msg="Notifier manager stopped" 2026-03-09T20:21:07.687 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:21:07 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[80032]: ts=2026-03-09T20:21:07.395Z caller=main.go:1273 level=info msg="See you next time!" 2026-03-09T20:21:07.687 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:21:07 vm09 podman[82630]: 2026-03-09 20:21:07.405771226 +0000 UTC m=+0.030190542 container died 962c244b4fc17d64a1784ff8e1a02685520c91bd06614189f5db8790f22b8716 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a, maintainer=The Prometheus Authors ) 2026-03-09T20:21:07.687 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:21:07 vm09 podman[82630]: 2026-03-09 20:21:07.52381999 +0000 UTC m=+0.148239315 container remove 962c244b4fc17d64a1784ff8e1a02685520c91bd06614189f5db8790f22b8716 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a, maintainer=The Prometheus Authors ) 2026-03-09T20:21:07.687 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:21:07 vm09 bash[82630]: ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a 2026-03-09T20:21:07.688 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:21:07 vm09 systemd[1]: ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@prometheus.a.service: Deactivated successfully. 
2026-03-09T20:21:07.688 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:21:07 vm09 systemd[1]: Stopped Ceph prometheus.a for c0151936-1bf4-11f1-b896-23f7bea8a6ea. 2026-03-09T20:21:07.688 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:21:07 vm09 systemd[1]: Starting Ceph prometheus.a for c0151936-1bf4-11f1-b896-23f7bea8a6ea... 2026-03-09T20:21:07.688 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:07 vm09 ceph-mon[54524]: pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:21:07.688 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:07 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:07.688 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:07 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:07.688 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:07 vm09 ceph-mon[54524]: Reconfiguring prometheus.a (dependencies changed)... 2026-03-09T20:21:07.688 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:07 vm09 ceph-mon[54524]: Reconfiguring daemon prometheus.a on vm09 2026-03-09T20:21:07.787 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:07 vm05 ceph-mon[51870]: pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:21:07.787 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:07 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:07.787 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:07 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:07.787 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:07 vm05 ceph-mon[51870]: Reconfiguring prometheus.a (dependencies changed)... 2026-03-09T20:21:07.787 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:07 vm05 ceph-mon[51870]: Reconfiguring daemon prometheus.a on vm09 2026-03-09T20:21:07.788 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:07 vm05 ceph-mon[61345]: pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:21:07.788 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:07 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:07.788 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:07 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:07.788 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:07 vm05 ceph-mon[61345]: Reconfiguring prometheus.a (dependencies changed)... 
2026-03-09T20:21:07.788 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:07 vm05 ceph-mon[61345]: Reconfiguring daemon prometheus.a on vm09 2026-03-09T20:21:08.023 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:21:07 vm09 podman[82706]: 2026-03-09 20:21:07.688220723 +0000 UTC m=+0.017208212 container create e765eb08e41565fe4fdfd1cf466c36aa2523847ead61f7564f78c307a223e230 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a, maintainer=The Prometheus Authors ) 2026-03-09T20:21:08.023 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:21:07 vm09 podman[82706]: 2026-03-09 20:21:07.733858858 +0000 UTC m=+0.062846346 container init e765eb08e41565fe4fdfd1cf466c36aa2523847ead61f7564f78c307a223e230 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a, maintainer=The Prometheus Authors ) 2026-03-09T20:21:08.023 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:21:07 vm09 podman[82706]: 2026-03-09 20:21:07.739342172 +0000 UTC m=+0.068329660 container start e765eb08e41565fe4fdfd1cf466c36aa2523847ead61f7564f78c307a223e230 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a, maintainer=The Prometheus Authors ) 2026-03-09T20:21:08.023 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:21:07 vm09 bash[82706]: e765eb08e41565fe4fdfd1cf466c36aa2523847ead61f7564f78c307a223e230 2026-03-09T20:21:08.023 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:21:07 vm09 podman[82706]: 2026-03-09 20:21:07.681308465 +0000 UTC m=+0.010295953 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0 2026-03-09T20:21:08.023 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:21:07 vm09 systemd[1]: Started Ceph prometheus.a for c0151936-1bf4-11f1-b896-23f7bea8a6ea. 
2026-03-09T20:21:08.023 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:21:07 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[82716]: ts=2026-03-09T20:21:07.769Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)" 2026-03-09T20:21:08.023 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:21:07 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[82716]: ts=2026-03-09T20:21:07.769Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)" 2026-03-09T20:21:08.023 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:21:07 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[82716]: ts=2026-03-09T20:21:07.769Z caller=main.go:623 level=info host_details="(Linux 5.14.0-686.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Feb 19 10:49:27 UTC 2026 x86_64 vm09 (none))" 2026-03-09T20:21:08.023 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:21:07 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[82716]: ts=2026-03-09T20:21:07.770Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)" 2026-03-09T20:21:08.023 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:21:07 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[82716]: ts=2026-03-09T20:21:07.770Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)" 2026-03-09T20:21:08.023 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:21:07 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[82716]: ts=2026-03-09T20:21:07.773Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=:9095 2026-03-09T20:21:08.023 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:21:07 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[82716]: ts=2026-03-09T20:21:07.774Z caller=main.go:1129 level=info msg="Starting TSDB ..." 
2026-03-09T20:21:08.023 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:21:07 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[82716]: ts=2026-03-09T20:21:07.776Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 2026-03-09T20:21:08.023 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:21:07 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[82716]: ts=2026-03-09T20:21:07.776Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=1.613µs 2026-03-09T20:21:08.023 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:21:07 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[82716]: ts=2026-03-09T20:21:07.776Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while" 2026-03-09T20:21:08.023 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:21:07 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[82716]: ts=2026-03-09T20:21:07.776Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=1 2026-03-09T20:21:08.023 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:21:07 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[82716]: ts=2026-03-09T20:21:07.776Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=1 maxSegment=1 2026-03-09T20:21:08.023 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:21:07 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[82716]: ts=2026-03-09T20:21:07.776Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=20.558µs wal_replay_duration=364.372µs wbl_replay_duration=131ns total_replay_duration=481.091µs 2026-03-09T20:21:08.023 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:21:07 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[82716]: ts=2026-03-09T20:21:07.778Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9095 2026-03-09T20:21:08.023 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:21:07 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[82716]: ts=2026-03-09T20:21:07.778Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." 
http2=false address=[::]:9095 2026-03-09T20:21:08.023 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:21:07 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[82716]: ts=2026-03-09T20:21:07.778Z caller=main.go:1150 level=info fs_type=XFS_SUPER_MAGIC 2026-03-09T20:21:08.023 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:21:07 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[82716]: ts=2026-03-09T20:21:07.778Z caller=main.go:1153 level=info msg="TSDB started" 2026-03-09T20:21:08.023 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:21:07 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[82716]: ts=2026-03-09T20:21:07.778Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 2026-03-09T20:21:08.023 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:21:07 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[82716]: ts=2026-03-09T20:21:07.788Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=9.664289ms db_storage=722ns remote_storage=922ns web_handler=110ns query_engine=390ns scrape=1.406433ms scrape_sd=115.637µs notify=7.695µs notify_sd=5.279µs rules=7.861045ms tracing=5.641µs 2026-03-09T20:21:08.023 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:21:07 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[82716]: ts=2026-03-09T20:21:07.788Z caller=main.go:1114 level=info msg="Server is ready to receive web requests." 2026-03-09T20:21:08.023 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:21:07 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[82716]: ts=2026-03-09T20:21:07.788Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..." 
2026-03-09T20:21:08.160 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:21:07 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: [09/Mar/2026:20:21:07] ENGINE Bus STOPPING 2026-03-09T20:21:08.160 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:21:07 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: [09/Mar/2026:20:21:07] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-09T20:21:08.160 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:21:07 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: [09/Mar/2026:20:21:07] ENGINE Bus STOPPED 2026-03-09T20:21:08.160 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:21:07 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: [09/Mar/2026:20:21:07] ENGINE Bus STARTING 2026-03-09T20:21:08.160 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:21:07 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: [09/Mar/2026:20:21:07] ENGINE Serving on http://:::9283 2026-03-09T20:21:08.160 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:21:07 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: [09/Mar/2026:20:21:07] ENGINE Bus STARTED 2026-03-09T20:21:08.160 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:21:07 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: [09/Mar/2026:20:21:07] ENGINE Bus STOPPING 2026-03-09T20:21:08.745 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:21:08 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: [09/Mar/2026:20:21:08] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-09T20:21:08.745 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:21:08 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: [09/Mar/2026:20:21:08] ENGINE Bus STOPPED 2026-03-09T20:21:08.745 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:21:08 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: [09/Mar/2026:20:21:08] ENGINE Bus STARTING 2026-03-09T20:21:08.745 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:21:08 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: [09/Mar/2026:20:21:08] ENGINE Serving on http://:::9283 2026-03-09T20:21:08.745 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:21:08 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: [09/Mar/2026:20:21:08] ENGINE Bus STARTED 2026-03-09T20:21:08.745 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:21:08 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: [09/Mar/2026:20:21:08] ENGINE Bus STOPPING 2026-03-09T20:21:08.745 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:21:08 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-alertmanager-a[93634]: ts=2026-03-09T20:21:08.494Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000794589s 2026-03-09T20:21:08.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:08 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:08.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:08 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:08.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:08 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T20:21:08.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:08 vm09 ceph-mon[54524]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' 
cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T20:21:08.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:08 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm05.local:9093"}]: dispatch 2026-03-09T20:21:08.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:08 vm09 ceph-mon[54524]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm05.local:9093"}]: dispatch 2026-03-09T20:21:08.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:08 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:08.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:08 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T20:21:08.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:08 vm09 ceph-mon[54524]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T20:21:08.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:08 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm09.local:9095"}]: dispatch 2026-03-09T20:21:08.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:08 vm09 ceph-mon[54524]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm09.local:9095"}]: dispatch 2026-03-09T20:21:08.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:08 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:08.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:08 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T20:21:08.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:08 vm09 ceph-mon[54524]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T20:21:08.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:08 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm09.local:3000"}]: dispatch 2026-03-09T20:21:08.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:08 vm09 ceph-mon[54524]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm09.local:3000"}]: dispatch 2026-03-09T20:21:08.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:08 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:08.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:08 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:21:09.112 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:08 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:09.112 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:08 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:09.112 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:08 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T20:21:09.112 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:08 vm05 ceph-mon[51870]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T20:21:09.112 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:08 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm05.local:9093"}]: dispatch 2026-03-09T20:21:09.112 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:08 vm05 ceph-mon[51870]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm05.local:9093"}]: dispatch 2026-03-09T20:21:09.112 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:08 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:09.112 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:08 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T20:21:09.112 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:08 vm05 ceph-mon[51870]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T20:21:09.112 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:08 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm09.local:9095"}]: dispatch 2026-03-09T20:21:09.112 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:08 vm05 ceph-mon[51870]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm09.local:9095"}]: dispatch 2026-03-09T20:21:09.112 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:08 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:09.112 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:08 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T20:21:09.112 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:08 vm05 ceph-mon[51870]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' 
cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T20:21:09.112 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:08 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm09.local:3000"}]: dispatch 2026-03-09T20:21:09.112 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:08 vm05 ceph-mon[51870]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm09.local:3000"}]: dispatch 2026-03-09T20:21:09.112 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:08 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:09.112 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:08 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:21:09.112 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:08 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:09.112 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:08 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:09.112 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:08 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T20:21:09.112 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:08 vm05 ceph-mon[61345]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T20:21:09.112 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:08 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm05.local:9093"}]: dispatch 2026-03-09T20:21:09.112 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:08 vm05 ceph-mon[61345]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm05.local:9093"}]: dispatch 2026-03-09T20:21:09.112 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:08 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:09.112 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:08 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T20:21:09.112 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:08 vm05 ceph-mon[61345]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T20:21:09.112 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:08 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm09.local:9095"}]: dispatch 2026-03-09T20:21:09.112 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:08 vm05 ceph-mon[61345]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' 
cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm09.local:9095"}]: dispatch 2026-03-09T20:21:09.112 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:08 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:09.112 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:08 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T20:21:09.112 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:08 vm05 ceph-mon[61345]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T20:21:09.112 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:08 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm09.local:3000"}]: dispatch 2026-03-09T20:21:09.112 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:08 vm05 ceph-mon[61345]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm09.local:3000"}]: dispatch 2026-03-09T20:21:09.112 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:08 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:09.112 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:08 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:21:09.409 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:21:09 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: [09/Mar/2026:20:21:09] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-09T20:21:09.410 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:21:09 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: [09/Mar/2026:20:21:09] ENGINE Bus STOPPED 2026-03-09T20:21:09.410 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:21:09 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: [09/Mar/2026:20:21:09] ENGINE Bus STARTING 2026-03-09T20:21:09.410 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:21:09 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: [09/Mar/2026:20:21:09] ENGINE Serving on http://:::9283 2026-03-09T20:21:09.410 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:21:09 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: [09/Mar/2026:20:21:09] ENGINE Bus STARTED 2026-03-09T20:21:10.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:09 vm05 ceph-mon[51870]: pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:21:10.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:09 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:10.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:09 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:10.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:09 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:10.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:09 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:10.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:09 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:21:10.160 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:09 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:21:10.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:09 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:10.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:09 vm05 ceph-mon[61345]: pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:21:10.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:09 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:10.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:09 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:10.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:09 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:10.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:09 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:10.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:09 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:21:10.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:09 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:21:10.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:09 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:10.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:09 vm09 ceph-mon[54524]: pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:21:10.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:09 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:10.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:09 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:10.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:09 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:10.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:09 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:10.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:09 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:21:10.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:09 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:21:10.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:09 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:12.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:11 vm05 ceph-mon[51870]: pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:21:12.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:11 vm05 ceph-mon[61345]: pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:21:12.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:11 vm09 ceph-mon[54524]: pgmap v16: 132 pgs: 
132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:21:14.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:13 vm05 ceph-mon[51870]: pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:21:14.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:13 vm05 ceph-mon[61345]: pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:21:14.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:13 vm09 ceph-mon[54524]: pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:21:15.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:14 vm05 ceph-mon[51870]: pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:21:15.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:14 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:21:15.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:14 vm05 ceph-mon[61345]: pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:21:15.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:14 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:21:15.213 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:14 vm09 ceph-mon[54524]: pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:21:15.213 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:14 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:21:15.522 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:21:15 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:21:16.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:15 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:21:16.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:15 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:21:16.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:15 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:21:16.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:16 vm05 ceph-mon[61345]: pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:21:16.848 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:21:16 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-alertmanager-a[93634]: ts=2026-03-09T20:21:16.498Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.004659559s 2026-03-09T20:21:16.848 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:16 vm05 ceph-mon[51870]: pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:21:17.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:16 vm09 ceph-mon[54524]: pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:21:18.909 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:21:18 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:21:18] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:21:19.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:19 vm09 ceph-mon[54524]: pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:21:19.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:19 vm05 ceph-mon[51870]: pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:21:19.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:19 vm05 ceph-mon[61345]: pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:21:21.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:21 vm05 ceph-mon[51870]: pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:21:21.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:21 vm05 ceph-mon[61345]: pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:21:21.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:21 vm09 ceph-mon[54524]: pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:21:23.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:23 vm05 ceph-mon[51870]: pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:21:23.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:23 vm05 ceph-mon[61345]: pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:21:23.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:23 vm09 ceph-mon[54524]: pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:21:25.522 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:21:25 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:21:26.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:25 vm09 ceph-mon[54524]: pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:21:26.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:25 vm05 ceph-mon[51870]: pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:21:26.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:25 vm05 ceph-mon[61345]: pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:21:27.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:27 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' 
entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:21:27.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:27 vm05 ceph-mon[61345]: pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:21:27.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:27 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:21:27.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:27 vm05 ceph-mon[51870]: pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:21:27.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:27 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:21:27.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:27 vm09 ceph-mon[54524]: pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:21:28.909 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:21:28 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:21:28] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:21:29.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:29 vm09 ceph-mon[54524]: pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:21:29.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:29 vm05 ceph-mon[51870]: pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:21:29.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:29 vm05 ceph-mon[61345]: pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:21:30.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:30 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:21:30.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:30 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:21:30.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:30 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:21:31.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:31 vm09 ceph-mon[54524]: pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:21:31.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:31 vm05 ceph-mon[51870]: pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:21:31.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:31 vm05 ceph-mon[61345]: pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:21:33.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:33 vm09 ceph-mon[54524]: pgmap v27: 132 pgs: 132 
active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:21:33.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:33 vm05 ceph-mon[51870]: pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:21:33.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:33 vm05 ceph-mon[61345]: pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:21:35.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:35 vm09 ceph-mon[54524]: pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:21:35.522 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:21:35 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:21:35.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:35 vm05 ceph-mon[51870]: pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:21:35.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:35 vm05 ceph-mon[61345]: pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:21:36.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:36 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:21:36.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:36 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:21:36.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:36 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:21:37.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:37 vm05 ceph-mon[51870]: pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:21:37.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:37 vm05 ceph-mon[61345]: pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:21:37.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:37 vm09 ceph-mon[54524]: pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:21:38.909 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:21:38 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:21:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:21:39.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:39 vm05 ceph-mon[51870]: pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:21:39.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:39 vm05 ceph-mon[61345]: pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:21:39.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:39 vm09 ceph-mon[54524]: pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB 
used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:21:41.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:41 vm05 ceph-mon[51870]: pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:21:41.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:41 vm05 ceph-mon[61345]: pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:21:41.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:41 vm09 ceph-mon[54524]: pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:21:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:43 vm05 ceph-mon[51870]: pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:21:43.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:43 vm05 ceph-mon[61345]: pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:21:43.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:43 vm09 ceph-mon[54524]: pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:21:44.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:44 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.2", "id": [1, 2]}]: dispatch 2026-03-09T20:21:44.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:44 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.2", "id": [1, 2]}]: dispatch 2026-03-09T20:21:44.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:44 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.4", "id": [1, 5]}]: dispatch 2026-03-09T20:21:44.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:44 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.4", "id": [1, 5]}]: dispatch 2026-03-09T20:21:44.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:44 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.f", "id": [1, 2]}]: dispatch 2026-03-09T20:21:44.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:44 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.f", "id": [1, 2]}]: dispatch 2026-03-09T20:21:44.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:44 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:21:44.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:44 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.2", "id": [1, 2]}]: dispatch 2026-03-09T20:21:44.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:44 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.2", "id": [1, 2]}]: dispatch 2026-03-09T20:21:44.660 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:44 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.4", "id": [1, 5]}]: dispatch 2026-03-09T20:21:44.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:44 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.4", "id": [1, 5]}]: dispatch 2026-03-09T20:21:44.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:44 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.f", "id": [1, 2]}]: dispatch 2026-03-09T20:21:44.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:44 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.f", "id": [1, 2]}]: dispatch 2026-03-09T20:21:44.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:44 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:21:44.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:44 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.2", "id": [1, 2]}]: dispatch 2026-03-09T20:21:44.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:44 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.2", "id": [1, 2]}]: dispatch 2026-03-09T20:21:44.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:44 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.4", "id": [1, 5]}]: dispatch 2026-03-09T20:21:44.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:44 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.4", "id": [1, 5]}]: dispatch 2026-03-09T20:21:44.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:44 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.f", "id": [1, 2]}]: dispatch 2026-03-09T20:21:44.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:44 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.f", "id": [1, 2]}]: dispatch 2026-03-09T20:21:44.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:44 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:21:45.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:45 vm09 ceph-mon[54524]: pgmap v33: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:21:45.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:45 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.2", "id": [1, 2]}]': finished 2026-03-09T20:21:45.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:45 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.4", "id": [1, 
5]}]': finished 2026-03-09T20:21:45.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:45 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.f", "id": [1, 2]}]': finished 2026-03-09T20:21:45.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:45 vm09 ceph-mon[54524]: osdmap e60: 8 total, 8 up, 8 in 2026-03-09T20:21:45.522 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:21:45 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:21:45.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:45 vm05 ceph-mon[51870]: pgmap v33: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:21:45.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:45 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.2", "id": [1, 2]}]': finished 2026-03-09T20:21:45.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:45 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.4", "id": [1, 5]}]': finished 2026-03-09T20:21:45.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:45 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.f", "id": [1, 2]}]': finished 2026-03-09T20:21:45.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:45 vm05 ceph-mon[51870]: osdmap e60: 8 total, 8 up, 8 in 2026-03-09T20:21:45.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:45 vm05 ceph-mon[61345]: pgmap v33: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:21:45.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:45 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.2", "id": [1, 2]}]': finished 2026-03-09T20:21:45.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:45 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.4", "id": [1, 5]}]': finished 2026-03-09T20:21:45.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:45 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.f", "id": [1, 2]}]': finished 2026-03-09T20:21:45.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:45 vm05 ceph-mon[61345]: osdmap e60: 8 total, 8 up, 8 in 2026-03-09T20:21:46.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:46 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:21:46.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:46 vm05 ceph-mon[51870]: osdmap e61: 8 total, 8 up, 8 in 2026-03-09T20:21:46.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:46 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:21:46.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:46 vm05 ceph-mon[61345]: osdmap e61: 8 total, 8 up, 8 in 2026-03-09T20:21:46.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:46 vm09 ceph-mon[54524]: from='client.14610 
v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:21:46.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:46 vm09 ceph-mon[54524]: osdmap e61: 8 total, 8 up, 8 in 2026-03-09T20:21:47.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:47 vm05 ceph-mon[51870]: pgmap v36: 132 pgs: 3 peering, 129 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:21:47.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:47 vm05 ceph-mon[51870]: Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY) 2026-03-09T20:21:47.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:47 vm05 ceph-mon[61345]: pgmap v36: 132 pgs: 3 peering, 129 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:21:47.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:47 vm05 ceph-mon[61345]: Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY) 2026-03-09T20:21:47.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:47 vm09 ceph-mon[54524]: pgmap v36: 132 pgs: 3 peering, 129 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:21:47.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:47 vm09 ceph-mon[54524]: Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY) 2026-03-09T20:21:48.909 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:21:48 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:21:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:21:49.253 INFO:tasks.workunit.client.0.vm05.stderr:Note: switching to '569c3e99c9b32a51b4eaf08731c728f4513ed589'. 2026-03-09T20:21:49.253 INFO:tasks.workunit.client.0.vm05.stderr: 2026-03-09T20:21:49.253 INFO:tasks.workunit.client.0.vm05.stderr:You are in 'detached HEAD' state. You can look around, make experimental 2026-03-09T20:21:49.253 INFO:tasks.workunit.client.0.vm05.stderr:changes and commit them, and you can discard any commits you make in this 2026-03-09T20:21:49.253 INFO:tasks.workunit.client.0.vm05.stderr:state without impacting any branches by switching back to a branch. 2026-03-09T20:21:49.253 INFO:tasks.workunit.client.0.vm05.stderr: 2026-03-09T20:21:49.253 INFO:tasks.workunit.client.0.vm05.stderr:If you want to create a new branch to retain commits you create, you may 2026-03-09T20:21:49.253 INFO:tasks.workunit.client.0.vm05.stderr:do so (now or later) by using -c with the switch command. 
Example: 2026-03-09T20:21:49.253 INFO:tasks.workunit.client.0.vm05.stderr: 2026-03-09T20:21:49.253 INFO:tasks.workunit.client.0.vm05.stderr: git switch -c <new-branch-name> 2026-03-09T20:21:49.253 INFO:tasks.workunit.client.0.vm05.stderr: 2026-03-09T20:21:49.253 INFO:tasks.workunit.client.0.vm05.stderr:Or undo this operation with: 2026-03-09T20:21:49.254 INFO:tasks.workunit.client.0.vm05.stderr: 2026-03-09T20:21:49.254 INFO:tasks.workunit.client.0.vm05.stderr: git switch - 2026-03-09T20:21:49.254 INFO:tasks.workunit.client.0.vm05.stderr: 2026-03-09T20:21:49.254 INFO:tasks.workunit.client.0.vm05.stderr:Turn off this advice by setting config variable advice.detachedHead to false 2026-03-09T20:21:49.254 INFO:tasks.workunit.client.0.vm05.stderr: 2026-03-09T20:21:49.254 INFO:tasks.workunit.client.0.vm05.stderr:HEAD is now at 569c3e99c9b qa/rgw: bucket notifications use pynose 2026-03-09T20:21:49.259 DEBUG:teuthology.orchestra.run.vm05:> cd -- /home/ubuntu/cephtest/clone.client.0/qa/workunits && if test -e Makefile ; then make ; fi && find -executable -type f -printf '%P\0' >/home/ubuntu/cephtest/workunits.list.client.0 2026-03-09T20:21:49.314 INFO:tasks.workunit.client.0.vm05.stdout:for d in direct_io fs ; do ( cd $d ; make all ) ; done 2026-03-09T20:21:49.316 INFO:tasks.workunit.client.0.vm05.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io' 2026-03-09T20:21:49.316 INFO:tasks.workunit.client.0.vm05.stdout:cc -Wall -Wextra -D_GNU_SOURCE direct_io_test.c -o direct_io_test 2026-03-09T20:21:49.361 INFO:tasks.workunit.client.0.vm05.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_sync_io.c -o test_sync_io 2026-03-09T20:21:49.392 INFO:tasks.workunit.client.0.vm05.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_short_dio_read.c -o test_short_dio_read 2026-03-09T20:21:49.417 INFO:tasks.workunit.client.0.vm05.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io' 2026-03-09T20:21:49.418 INFO:tasks.workunit.client.0.vm05.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs' 2026-03-09T20:21:49.418 INFO:tasks.workunit.client.0.vm05.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_o_trunc.c -o test_o_trunc 2026-03-09T20:21:49.444 INFO:tasks.workunit.client.0.vm05.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs' 2026-03-09T20:21:49.447 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-09T20:21:49.447 DEBUG:teuthology.orchestra.run.vm05:> dd if=/home/ubuntu/cephtest/workunits.list.client.0 of=/dev/stdout 2026-03-09T20:21:49.501 INFO:tasks.workunit:Running workunits matching rados/test.sh on client.0... 2026-03-09T20:21:49.502 INFO:tasks.workunit:Running workunit rados/test.sh...
2026-03-09T20:21:49.502 DEBUG:teuthology.orchestra.run.vm05:workunit test rados/test.sh> mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=569c3e99c9b32a51b4eaf08731c728f4513ed589 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh 2026-03-09T20:21:49.560 INFO:tasks.workunit.client.0.vm05.stderr:+ parallel=1 2026-03-09T20:21:49.560 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' '' = --serial ']' 2026-03-09T20:21:49.560 INFO:tasks.workunit.client.0.vm05.stderr:+ crimson=0 2026-03-09T20:21:49.560 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' '' = --crimson ']' 2026-03-09T20:21:49.560 INFO:tasks.workunit.client.0.vm05.stderr:+ color= 2026-03-09T20:21:49.560 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -t 1 ']' 2026-03-09T20:21:49.560 INFO:tasks.workunit.client.0.vm05.stderr:+ trap cleanup EXIT ERR HUP INT QUIT 2026-03-09T20:21:49.560 INFO:tasks.workunit.client.0.vm05.stderr:+ GTEST_OUTPUT_DIR=/home/ubuntu/cephtest/archive/unit_test_xml_report 2026-03-09T20:21:49.560 INFO:tasks.workunit.client.0.vm05.stderr:+ mkdir -p /home/ubuntu/cephtest/archive/unit_test_xml_report 2026-03-09T20:21:49.560 INFO:tasks.workunit.client.0.vm05.stderr:+ declare -A pids 2026-03-09T20:21:49.561 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T20:21:49.561 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T20:21:49.561 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_aio 2026-03-09T20:21:49.561 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_aio' 2026-03-09T20:21:49.561 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_aio 2026-03-09T20:21:49.561 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-09T20:21:49.563 INFO:tasks.workunit.client.0.vm05.stdout:test api_aio on pid 94218 2026-03-09T20:21:49.563 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_aio 2026-03-09T20:21:49.563 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94218 2026-03-09T20:21:49.563 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_aio on pid 94218' 2026-03-09T20:21:49.563 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=94218 2026-03-09T20:21:49.563 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T20:21:49.563 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T20:21:49.563 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_aio 
--gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_aio.xml 2>&1 | tee ceph_test_rados_api_aio.log | sed "s/^/ api_aio: /"' 2026-03-09T20:21:49.563 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_aio_pp 2026-03-09T20:21:49.563 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_aio_pp' 2026-03-09T20:21:49.563 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_aio_pp 2026-03-09T20:21:49.563 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-09T20:21:49.563 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-09T20:21:49.563 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-09T20:21:49.563 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-09T20:21:49.563 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.563 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-09T20:21:49.563 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-09T20:21:49.563 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-09T20:21:49.564 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-09T20:21:49.564 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-09T20:21:49.564 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.564 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-09T20:21:49.564 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.564 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorgrep.sh 2026-03-09T20:21:49.564 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.565 INFO:tasks.workunit.client.0.vm05.stdout:test api_aio_pp on pid 94226 2026-03-09T20:21:49.565 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_aio_pp 2026-03-09T20:21:49.565 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94226 2026-03-09T20:21:49.565 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_aio_pp on pid 94226' 2026-03-09T20:21:49.565 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=94226 2026-03-09T20:21:49.565 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T20:21:49.565 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T20:21:49.565 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_io 2026-03-09T20:21:49.565 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_io' 2026-03-09T20:21:49.566 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_aio_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_aio_pp.xml 2>&1 | tee ceph_test_rados_api_aio_pp.log | sed "s/^/ api_aio_pp: /"' 2026-03-09T20:21:49.566 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-09T20:21:49.566 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-09T20:21:49.566 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-09T20:21:49.566 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.566 
INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-09T20:21:49.566 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.566 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-09T20:21:49.566 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-09T20:21:49.566 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-09T20:21:49.566 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.566 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-09T20:21:49.566 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.566 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-09T20:21:49.566 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-09T20:21:49.566 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-09T20:21:49.566 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.566 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-09T20:21:49.566 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.566 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-09T20:21:49.566 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.566 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-09T20:21:49.566 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-09T20:21:49.566 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-09T20:21:49.566 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-09T20:21:49.567 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.567 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-09T20:21:49.567 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-09T20:21:49.567 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-09T20:21:49.567 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-09T20:21:49.567 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.567 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-09T20:21:49.567 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.567 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorgrep.sh 2026-03-09T20:21:49.567 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.567 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-09T20:21:49.567 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-09T20:21:49.567 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-09T20:21:49.567 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-09T20:21:49.567 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.568 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-09T20:21:49.568 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.568 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorzgrep.sh 2026-03-09T20:21:49.568 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-09T20:21:49.568 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.568 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_io 2026-03-09T20:21:49.569 INFO:tasks.workunit.client.0.vm05.stdout:test api_io on pid 94237 2026-03-09T20:21:49.569 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_io 2026-03-09T20:21:49.569 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94237 2026-03-09T20:21:49.569 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_io on pid 94237' 2026-03-09T20:21:49.569 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=94237 2026-03-09T20:21:49.569 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T20:21:49.569 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T20:21:49.569 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_io --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_io.xml 2>&1 | tee ceph_test_rados_api_io.log | sed "s/^/ api_io: /"' 2026-03-09T20:21:49.569 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_io_pp 2026-03-09T20:21:49.570 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_io_pp' 2026-03-09T20:21:49.570 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-09T20:21:49.570 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-09T20:21:49.570 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-09T20:21:49.570 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-09T20:21:49.570 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.570 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-09T20:21:49.571 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-09T20:21:49.571 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-09T20:21:49.571 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-09T20:21:49.571 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.571 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-09T20:21:49.571 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.571 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-09T20:21:49.571 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-09T20:21:49.571 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-09T20:21:49.571 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.571 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-09T20:21:49.571 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.571 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorsysstat.sh 2026-03-09T20:21:49.571 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-09T20:21:49.571 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-09T20:21:49.571 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.571 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-09T20:21:49.571 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.571 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-09T20:21:49.571 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-09T20:21:49.571 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.571 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_io_pp 2026-03-09T20:21:49.572 INFO:tasks.workunit.client.0.vm05.stdout:test api_io_pp on pid 94245 2026-03-09T20:21:49.572 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_io_pp 2026-03-09T20:21:49.572 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94245 2026-03-09T20:21:49.572 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_io_pp on pid 94245' 2026-03-09T20:21:49.572 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=94245 2026-03-09T20:21:49.572 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T20:21:49.572 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T20:21:49.572 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_io_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_io_pp.xml 2>&1 | tee ceph_test_rados_api_io_pp.log | sed "s/^/ api_io_pp: /"' 2026-03-09T20:21:49.572 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-09T20:21:49.573 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-09T20:21:49.573 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-09T20:21:49.573 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-09T20:21:49.573 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.573 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-09T20:21:49.573 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.573 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/debuginfod.sh 2026-03-09T20:21:49.573 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-09T20:21:49.573 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-09T20:21:49.573 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-09T20:21:49.573 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-09T20:21:49.573 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.573 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-09T20:21:49.573 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.573 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-09T20:21:49.573 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.573 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-09T20:21:49.573 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.573 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/lang.sh 2026-03-09T20:21:49.573 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-09T20:21:49.573 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-09T20:21:49.573 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-09T20:21:49.573 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.573 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-09T20:21:49.573 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.573 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorgrep.sh 2026-03-09T20:21:49.573 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.573 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-09T20:21:49.573 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-09T20:21:49.573 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-09T20:21:49.573 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-09T20:21:49.573 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.573 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-09T20:21:49.574 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_asio 2026-03-09T20:21:49.574 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_asio' 2026-03-09T20:21:49.574 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-09T20:21:49.574 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-09T20:21:49.574 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-09T20:21:49.574 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-09T20:21:49.574 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.574 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-09T20:21:49.574 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.574 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorgrep.sh 2026-03-09T20:21:49.574 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.575 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:49.575 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-09T20:21:49.575 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:49.575 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-09T20:21:49.575 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-09T20:21:49.575 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_asio 2026-03-09T20:21:49.575 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-09T20:21:49.577 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-09T20:21:49.577 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-09T20:21:49.577 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-09T20:21:49.577 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.577 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-09T20:21:49.577 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.577 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorzgrep.sh 2026-03-09T20:21:49.577 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-09T20:21:49.577 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.578 INFO:tasks.workunit.client.0.vm05.stdout:test api_asio on pid 94260 2026-03-09T20:21:49.578 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_asio 2026-03-09T20:21:49.578 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94260 2026-03-09T20:21:49.578 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_asio on pid 94260' 2026-03-09T20:21:49.578 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=94260 2026-03-09T20:21:49.578 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T20:21:49.578 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T20:21:49.578 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-09T20:21:49.578 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_list 2026-03-09T20:21:49.579 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-09T20:21:49.579 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-09T20:21:49.579 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-09T20:21:49.579 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:49.579 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-09T20:21:49.579 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:49.579 
INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-09T20:21:49.579 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-09T20:21:49.579 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:49.579 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:49.579 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-09T20:21:49.579 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.579 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-09T20:21:49.579 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.579 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-09T20:21:49.579 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-09T20:21:49.579 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.579 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-09T20:21:49.579 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.579 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/which2.sh 2026-03-09T20:21:49.579 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/94218/exe ']' 2026-03-09T20:21:49.579 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_list' 2026-03-09T20:21:49.579 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_asio --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_asio.xml 2>&1 | tee ceph_test_rados_api_asio.log | sed "s/^/ api_asio: /"' 2026-03-09T20:21:49.580 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-09T20:21:49.580 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-09T20:21:49.580 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-09T20:21:49.580 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.580 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-09T20:21:49.580 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.580 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-09T20:21:49.580 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-09T20:21:49.580 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-09T20:21:49.581 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.581 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-09T20:21:49.581 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.581 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-09T20:21:49.581 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-09T20:21:49.581 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-09T20:21:49.581 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.581 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-09T20:21:49.581 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.581 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorxzgrep.sh 2026-03-09T20:21:49.581 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.581 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-09T20:21:49.581 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-09T20:21:49.581 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-09T20:21:49.581 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-09T20:21:49.581 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.581 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-09T20:21:49.581 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-09T20:21:49.581 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-09T20:21:49.581 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-09T20:21:49.581 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.581 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-09T20:21:49.581 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.581 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-09T20:21:49.581 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-09T20:21:49.581 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-09T20:21:49.581 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.581 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-09T20:21:49.582 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.582 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-09T20:21:49.582 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-09T20:21:49.582 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-09T20:21:49.582 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.582 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-09T20:21:49.582 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.582 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-09T20:21:49.582 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.582 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-09T20:21:49.582 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-09T20:21:49.582 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-09T20:21:49.582 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.582 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-09T20:21:49.582 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.582 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorgrep.sh 2026-03-09T20:21:49.582 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.582 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-09T20:21:49.583 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-09T20:21:49.583 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-09T20:21:49.583 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-09T20:21:49.583 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.583 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-09T20:21:49.583 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.583 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-09T20:21:49.583 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-09T20:21:49.583 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-09T20:21:49.583 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-09T20:21:49.583 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-09T20:21:49.583 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.583 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-09T20:21:49.584 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.584 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-09T20:21:49.584 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.584 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-09T20:21:49.584 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.584 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/lang.sh 2026-03-09T20:21:49.584 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-09T20:21:49.584 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_list 2026-03-09T20:21:49.584 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/94218/exe 2026-03-09T20:21:49.584 INFO:tasks.workunit.client.0.vm05.stdout:test api_list on pid 94275 2026-03-09T20:21:49.584 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_list 2026-03-09T20:21:49.584 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94275 2026-03-09T20:21:49.584 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_list on pid 94275' 2026-03-09T20:21:49.584 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=94275 2026-03-09T20:21:49.584 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T20:21:49.584 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T20:21:49.586 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-09T20:21:49.586 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-09T20:21:49.586 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-09T20:21:49.586 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-09T20:21:49.586 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.586 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-09T20:21:49.586 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.586 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorzgrep.sh 2026-03-09T20:21:49.586 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-09T20:21:49.586 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.586 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-09T20:21:49.586 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_lock 2026-03-09T20:21:49.586 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_lock' 2026-03-09T20:21:49.586 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_list --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_list.xml 2>&1 | tee ceph_test_rados_api_list.log | sed "s/^/ api_list: /"' 2026-03-09T20:21:49.586 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-09T20:21:49.586 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-09T20:21:49.586 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-09T20:21:49.587 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-09T20:21:49.587 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-09T20:21:49.587 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-09T20:21:49.587 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-09T20:21:49.587 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-09T20:21:49.587 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-09T20:21:49.587 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-09T20:21:49.587 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-09T20:21:49.587 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-09T20:21:49.587 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-09T20:21:49.588 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-09T20:21:49.588 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-09T20:21:49.588 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-09T20:21:49.588 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-09T20:21:49.588 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-09T20:21:49.588 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.588 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-09T20:21:49.588 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.588 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorzgrep.sh 2026-03-09T20:21:49.588 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-09T20:21:49.588 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.589 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:49.589 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-09T20:21:49.589 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:49.589 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-09T20:21:49.589 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-09T20:21:49.589 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-09T20:21:49.589 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.589 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-09T20:21:49.589 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-09T20:21:49.589 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-09T20:21:49.589 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-09T20:21:49.589 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.589 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-09T20:21:49.589 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.589 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-09T20:21:49.589 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-09T20:21:49.589 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-09T20:21:49.589 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.589 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-09T20:21:49.590 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.590 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-09T20:21:49.590 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-09T20:21:49.590 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-09T20:21:49.590 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.590 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-09T20:21:49.590 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.590 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorxzgrep.sh 2026-03-09T20:21:49.590 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.590 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_aio: /' 2026-03-09T20:21:49.590 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_api_aio.log 2026-03-09T20:21:49.590 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-09T20:21:49.592 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-09T20:21:49.592 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-09T20:21:49.592 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-09T20:21:49.592 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-09T20:21:49.592 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.592 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-09T20:21:49.592 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.592 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-09T20:21:49.592 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-09T20:21:49.592 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-09T20:21:49.592 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-09T20:21:49.592 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-09T20:21:49.592 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.592 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-09T20:21:49.592 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.592 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-09T20:21:49.592 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.592 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-09T20:21:49.592 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.592 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/lang.sh 2026-03-09T20:21:49.592 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-09T20:21:49.592 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_lock 2026-03-09T20:21:49.593 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_aio --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_aio.xml 2026-03-09T20:21:49.594 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-09T20:21:49.594 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-09T20:21:49.594 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-09T20:21:49.594 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:49.594 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-09T20:21:49.594 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:49.594 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-09T20:21:49.594 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-09T20:21:49.594 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:49.594 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:49.594 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-09T20:21:49.594 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.594 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-09T20:21:49.594 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.594 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-09T20:21:49.594 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-09T20:21:49.594 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.594 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-09T20:21:49.594 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.594 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/which2.sh 2026-03-09T20:21:49.594 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/94226/exe ']' 2026-03-09T20:21:49.594 INFO:tasks.workunit.client.0.vm05.stdout:test api_lock on pid 94295 2026-03-09T20:21:49.594 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_lock 2026-03-09T20:21:49.594 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-09T20:21:49.594 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94295 2026-03-09T20:21:49.594 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_lock on pid 94295' 2026-03-09T20:21:49.594 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=94295 2026-03-09T20:21:49.594 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T20:21:49.594 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T20:21:49.594 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-09T20:21:49.594 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-09T20:21:49.594 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.594 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-09T20:21:49.594 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.594 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorgrep.sh 2026-03-09T20:21:49.594 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.596 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-09T20:21:49.596 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-09T20:21:49.596 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-09T20:21:49.596 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.596 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-09T20:21:49.596 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.596 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-09T20:21:49.596 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-09T20:21:49.596 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-09T20:21:49.596 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-09T20:21:49.596 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-09T20:21:49.596 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.596 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-09T20:21:49.596 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.596 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/gawk.sh 2026-03-09T20:21:49.596 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.596 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-09T20:21:49.596 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.596 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/lang.sh 2026-03-09T20:21:49.596 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-09T20:21:49.596 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-09T20:21:49.596 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_lock_pp 2026-03-09T20:21:49.597 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_lock_pp' 2026-03-09T20:21:49.597 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_lock --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_lock.xml 2>&1 | tee ceph_test_rados_api_lock.log | sed "s/^/ api_lock: /"' 2026-03-09T20:21:49.598 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-09T20:21:49.598 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-09T20:21:49.598 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-09T20:21:49.598 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.598 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-09T20:21:49.598 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.598 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorzgrep.sh 2026-03-09T20:21:49.598 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-09T20:21:49.598 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.598 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-09T20:21:49.598 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-09T20:21:49.599 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-09T20:21:49.599 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-09T20:21:49.599 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.599 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-09T20:21:49.599 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-09T20:21:49.600 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-09T20:21:49.600 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-09T20:21:49.600 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-09T20:21:49.600 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.600 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-09T20:21:49.600 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.600 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-09T20:21:49.600 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' 
-t 0 ']' 2026-03-09T20:21:49.601 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-09T20:21:49.601 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.601 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-09T20:21:49.601 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.601 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-09T20:21:49.601 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-09T20:21:49.601 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-09T20:21:49.601 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.601 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-09T20:21:49.601 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.601 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-09T20:21:49.601 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.601 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/94226/exe 2026-03-09T20:21:49.602 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-09T20:21:49.602 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-09T20:21:49.602 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-09T20:21:49.602 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-09T20:21:49.602 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-09T20:21:49.602 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-09T20:21:49.602 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-09T20:21:49.602 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-09T20:21:49.602 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-09T20:21:49.602 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-09T20:21:49.602 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-09T20:21:49.602 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-09T20:21:49.602 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-09T20:21:49.602 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-09T20:21:49.604 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:49.604 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-09T20:21:49.604 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:49.604 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-09T20:21:49.604 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-09T20:21:49.604 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-09T20:21:49.604 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-09T20:21:49.604 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-09T20:21:49.604 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.604 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-09T20:21:49.604 
INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.604 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorgrep.sh 2026-03-09T20:21:49.604 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.605 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:49.605 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-09T20:21:49.605 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:49.605 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-09T20:21:49.605 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-09T20:21:49.605 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_aio_pp: /' 2026-03-09T20:21:49.605 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_api_aio_pp.log 2026-03-09T20:21:49.606 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-09T20:21:49.606 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-09T20:21:49.606 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-09T20:21:49.606 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.606 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-09T20:21:49.606 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.606 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-09T20:21:49.606 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-09T20:21:49.606 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-09T20:21:49.606 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-09T20:21:49.606 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-09T20:21:49.606 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.606 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-09T20:21:49.606 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.606 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-09T20:21:49.606 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.606 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-09T20:21:49.606 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.606 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/lang.sh 2026-03-09T20:21:49.606 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-09T20:21:49.607 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_aio_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_aio_pp.xml 2026-03-09T20:21:49.608 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_lock_pp 2026-03-09T20:21:49.609 INFO:tasks.workunit.client.0.vm05.stdout:test api_lock_pp on pid 94319 2026-03-09T20:21:49.609 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_lock_pp 2026-03-09T20:21:49.609 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94319 2026-03-09T20:21:49.609 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_lock_pp on pid 94319' 2026-03-09T20:21:49.609 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=94319 2026-03-09T20:21:49.609 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T20:21:49.609 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T20:21:49.609 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-09T20:21:49.609 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-09T20:21:49.609 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-09T20:21:49.609 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-09T20:21:49.609 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.609 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-09T20:21:49.609 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.609 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorzgrep.sh 2026-03-09T20:21:49.609 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-09T20:21:49.609 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.610 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-09T20:21:49.611 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-09T20:21:49.611 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-09T20:21:49.611 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-09T20:21:49.611 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:49.611 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-09T20:21:49.611 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:49.611 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-09T20:21:49.611 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-09T20:21:49.611 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:49.611 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-09T20:21:49.611 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:49.611 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-09T20:21:49.611 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-09T20:21:49.611 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-09T20:21:49.611 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.611 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-09T20:21:49.611 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:49.611 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.611 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-09T20:21:49.611 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-09T20:21:49.611 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:49.611 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-09T20:21:49.611 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-09T20:21:49.611 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-09T20:21:49.611 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.611 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-09T20:21:49.611 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.611 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:49.611 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/which2.sh 2026-03-09T20:21:49.611 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/94237/exe ']' 2026-03-09T20:21:49.611 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:49.611 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-09T20:21:49.611 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.611 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-09T20:21:49.611 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.611 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-09T20:21:49.611 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-09T20:21:49.612 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.612 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-09T20:21:49.612 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.612 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/which2.sh 2026-03-09T20:21:49.612 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/94245/exe ']' 2026-03-09T20:21:49.612 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-09T20:21:49.613 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_misc 2026-03-09T20:21:49.613 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_misc' 2026-03-09T20:21:49.616 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_lock_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_lock_pp.xml 2>&1 | tee ceph_test_rados_api_lock_pp.log | sed "s/^/ api_lock_pp: /"' 2026-03-09T20:21:49.617 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:49.617 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-09T20:21:49.617 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:49.617 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-09T20:21:49.617 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-09T20:21:49.617 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-09T20:21:49.619 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-09T20:21:49.619 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-09T20:21:49.619 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-09T20:21:49.619 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.619 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-09T20:21:49.619 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.619 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-09T20:21:49.619 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' 
-t 0 ']' 2026-03-09T20:21:49.619 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-09T20:21:49.619 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.619 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-09T20:21:49.619 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.619 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-09T20:21:49.619 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-09T20:21:49.619 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-09T20:21:49.619 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.619 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-09T20:21:49.619 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.619 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-09T20:21:49.619 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.619 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-09T20:21:49.619 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-09T20:21:49.619 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-09T20:21:49.619 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-09T20:21:49.619 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.619 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-09T20:21:49.620 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/94245/exe 2026-03-09T20:21:49.620 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-09T20:21:49.620 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-09T20:21:49.621 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/94237/exe 2026-03-09T20:21:49.621 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-09T20:21:49.621 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-09T20:21:49.621 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-09T20:21:49.621 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-09T20:21:49.621 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-09T20:21:49.621 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-09T20:21:49.621 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-09T20:21:49.621 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-09T20:21:49.621 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-09T20:21:49.621 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-09T20:21:49.621 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-09T20:21:49.621 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-09T20:21:49.622 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_misc 2026-03-09T20:21:49.623 INFO:tasks.workunit.client.0.vm05.stdout:test api_misc on pid 94341 2026-03-09T20:21:49.623 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_misc 
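The xtrace records above and below come from the rados API workunit (qa/workunits/rados/test.sh, checked out from the tt-squid suite branch for this job). Each pass through the `for f in api_aio api_aio_pp ...` loop spawns one ceph_test_rados_* gtest binary in the background, pipes its combined output through tee into a per-test log and through sed to tag every line with the test name, and records the child PID. A minimal sketch of that launch loop, reconstructed from the trace rather than copied from the script (the abbreviated test list, the associative pids array declaration, and the final wait are assumptions), looks roughly like this:

    #!/usr/bin/env bash
    # Sketch reconstructed from the xtrace above; not the verbatim qa/workunits/rados/test.sh.
    set -ex
    xmldir=/home/ubuntu/cephtest/archive/unit_test_xml_report   # path as seen in this run
    parallel=1
    declare -A pids    # assumption: a pid-per-test map; the real script may store pids differently

    for f in api_aio api_aio_pp api_io api_io_pp api_asio api_lock_pp api_misc; do   # abbreviated list
        if [ "$parallel" -eq 1 ]; then
            r=$(printf '%25s' "$f")          # right-padded tag used as the sed prefix
            # run the gtest binary under pipefail so its exit status survives tee|sed,
            # write an XML report into the unit_test_xml_report archive, and keep a
            # per-test log while the live output gets the per-test prefix
            bash -o pipefail -exc \
                "ceph_test_rados_$f --gtest_output=xml:$xmldir/$f.xml 2>&1 \
                 | tee ceph_test_rados_$f.log | sed 's/^/$r: /'" &
            pid=$!
            ff=$(echo "$f" | awk '{print $1}')
            echo "test $ff on pid $pid"
            pids[$f]=$pid
        fi
    done

    # assumption: the harness then waits on each recorded pid and fails the workunit
    # if any test exited non-zero
    for f in "${!pids[@]}"; do wait "${pids[$f]}"; done

The `-o pipefail` wrapper is what keeps a failing gtest binary from being masked by the trailing tee and sed, and the printf-padded prefix is what makes the interleaved per-test output later in this log attributable to a single test.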
2026-03-09T20:21:49.623 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_io_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_io_pp.xml 2026-03-09T20:21:49.623 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94341 2026-03-09T20:21:49.623 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_misc on pid 94341' 2026-03-09T20:21:49.623 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=94341 2026-03-09T20:21:49.623 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T20:21:49.623 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T20:21:49.623 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_api_io_pp.log 2026-03-09T20:21:49.623 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_io_pp: /' 2026-03-09T20:21:49.625 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-09T20:21:49.625 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-09T20:21:49.625 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-09T20:21:49.625 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.625 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-09T20:21:49.625 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.625 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorgrep.sh 2026-03-09T20:21:49.625 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.626 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-09T20:21:49.626 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-09T20:21:49.626 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-09T20:21:49.626 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.626 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-09T20:21:49.626 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.626 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-09T20:21:49.626 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-09T20:21:49.626 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-09T20:21:49.626 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-09T20:21:49.626 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-09T20:21:49.626 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.626 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-09T20:21:49.626 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.626 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/gawk.sh 2026-03-09T20:21:49.626 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.626 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-09T20:21:49.626 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.626 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/lang.sh 2026-03-09T20:21:49.626 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-09T20:21:49.628 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-09T20:21:49.628 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-09T20:21:49.628 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-09T20:21:49.628 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:49.628 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-09T20:21:49.628 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:49.628 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-09T20:21:49.628 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-09T20:21:49.628 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:49.628 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:49.628 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-09T20:21:49.628 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.628 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-09T20:21:49.628 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.628 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-09T20:21:49.628 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-09T20:21:49.628 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.628 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-09T20:21:49.628 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.628 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/which2.sh 2026-03-09T20:21:49.628 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/94260/exe ']' 2026-03-09T20:21:49.628 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-09T20:21:49.629 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-09T20:21:49.629 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-09T20:21:49.629 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-09T20:21:49.629 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-09T20:21:49.629 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-09T20:21:49.629 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-09T20:21:49.629 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-09T20:21:49.629 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-09T20:21:49.629 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-09T20:21:49.629 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-09T20:21:49.629 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-09T20:21:49.629 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-09T20:21:49.630 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_misc --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_misc.xml 2>&1 | tee ceph_test_rados_api_misc.log | sed "s/^/ api_misc: /"' 2026-03-09T20:21:49.630 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_misc_pp 2026-03-09T20:21:49.630 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_misc_pp' 2026-03-09T20:21:49.631 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-09T20:21:49.633 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-09T20:21:49.633 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-09T20:21:49.633 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-09T20:21:49.633 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.633 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-09T20:21:49.633 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.633 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorzgrep.sh 2026-03-09T20:21:49.633 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-09T20:21:49.633 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.634 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-09T20:21:49.634 INFO:tasks.workunit.client.0.vm05.stderr:+ . 
/etc/bashrc 2026-03-09T20:21:49.634 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-09T20:21:49.634 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-09T20:21:49.634 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.634 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-09T20:21:49.636 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_api_io.log 2026-03-09T20:21:49.636 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_io: /' 2026-03-09T20:21:49.637 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_io --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_io.xml 2026-03-09T20:21:49.642 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:49.642 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-09T20:21:49.642 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:49.642 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-09T20:21:49.642 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-09T20:21:49.642 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/94260/exe 2026-03-09T20:21:49.643 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-09T20:21:49.645 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-09T20:21:49.645 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-09T20:21:49.645 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-09T20:21:49.645 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-09T20:21:49.645 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-09T20:21:49.645 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-09T20:21:49.645 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-09T20:21:49.645 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-09T20:21:49.645 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-09T20:21:49.645 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-09T20:21:49.645 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-09T20:21:49.645 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-09T20:21:49.645 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-09T20:21:49.645 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-09T20:21:49.645 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-09T20:21:49.645 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.645 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-09T20:21:49.645 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.645 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorgrep.sh 2026-03-09T20:21:49.645 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.646 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_asio --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_asio.xml 2026-03-09T20:21:49.650 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-09T20:21:49.651 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-09T20:21:49.651 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-09T20:21:49.651 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.651 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-09T20:21:49.651 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.651 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-09T20:21:49.651 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-09T20:21:49.651 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-09T20:21:49.651 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.651 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-09T20:21:49.651 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.651 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-09T20:21:49.651 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-09T20:21:49.651 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-09T20:21:49.651 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.651 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-09T20:21:49.651 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.651 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorxzgrep.sh 2026-03-09T20:21:49.651 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.651 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_misc_pp 2026-03-09T20:21:49.651 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-09T20:21:49.652 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-09T20:21:49.653 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-09T20:21:49.653 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-09T20:21:49.653 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-09T20:21:49.653 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:49.653 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-09T20:21:49.653 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:49.653 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-09T20:21:49.653 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-09T20:21:49.653 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:49.653 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:49.653 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-09T20:21:49.653 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.653 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-09T20:21:49.653 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.653 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-09T20:21:49.653 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-09T20:21:49.653 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.653 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-09T20:21:49.653 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.653 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/which2.sh 2026-03-09T20:21:49.653 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/94275/exe ']' 2026-03-09T20:21:49.653 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_api_asio.log 2026-03-09T20:21:49.654 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_asio: /' 2026-03-09T20:21:49.657 INFO:tasks.workunit.client.0.vm05.stdout:test api_misc_pp on pid 94385 2026-03-09T20:21:49.657 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_misc_pp 2026-03-09T20:21:49.657 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94385 2026-03-09T20:21:49.657 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_misc_pp on pid 94385' 2026-03-09T20:21:49.657 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=94385 2026-03-09T20:21:49.657 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T20:21:49.657 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T20:21:49.659 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_misc_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_misc_pp.xml 2>&1 | tee ceph_test_rados_api_misc_pp.log | sed "s/^/ api_misc_pp: /"' 2026-03-09T20:21:49.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:49 vm05 ceph-mon[51870]: pgmap v37: 132 pgs: 3 peering, 129 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:21:49.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:49 vm05 ceph-mon[61345]: pgmap v37: 132 pgs: 3 peering, 129 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:21:49.662 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-09T20:21:49.662 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-09T20:21:49.662 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-09T20:21:49.662 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.662 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-09T20:21:49.662 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.662 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-09T20:21:49.662 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-09T20:21:49.662 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-09T20:21:49.662 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-09T20:21:49.662 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-09T20:21:49.662 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-09T20:21:49.662 INFO:tasks.workunit.client.0.vm05.stderr:+ . 
/etc/bashrc 2026-03-09T20:21:49.662 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.662 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-09T20:21:49.662 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.662 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-09T20:21:49.662 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.662 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-09T20:21:49.662 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-09T20:21:49.662 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-09T20:21:49.662 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.662 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.662 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/lang.sh 2026-03-09T20:21:49.662 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-09T20:21:49.662 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-09T20:21:49.662 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_tier_pp 2026-03-09T20:21:49.662 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_tier_pp' 2026-03-09T20:21:49.662 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-09T20:21:49.663 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:49.663 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-09T20:21:49.663 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:49.663 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-09T20:21:49.663 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-09T20:21:49.664 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-09T20:21:49.664 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-09T20:21:49.665 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-09T20:21:49.665 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-09T20:21:49.665 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:49.665 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-09T20:21:49.665 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:49.665 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-09T20:21:49.665 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-09T20:21:49.665 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:49.665 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:49.665 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-09T20:21:49.665 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.665 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-09T20:21:49.665 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.665 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/less.sh 2026-03-09T20:21:49.665 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-09T20:21:49.665 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.665 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-09T20:21:49.665 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.665 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/which2.sh 2026-03-09T20:21:49.665 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/94295/exe ']' 2026-03-09T20:21:49.665 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/94295/exe 2026-03-09T20:21:49.665 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/94275/exe 2026-03-09T20:21:49.666 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-09T20:21:49.667 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-09T20:21:49.667 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-09T20:21:49.667 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-09T20:21:49.667 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-09T20:21:49.667 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-09T20:21:49.667 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-09T20:21:49.667 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-09T20:21:49.667 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-09T20:21:49.667 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-09T20:21:49.667 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-09T20:21:49.667 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-09T20:21:49.667 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-09T20:21:49.667 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-09T20:21:49.667 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_lock --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_lock.xml 2026-03-09T20:21:49.668 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-09T20:21:49.668 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-09T20:21:49.668 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-09T20:21:49.668 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-09T20:21:49.668 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-09T20:21:49.668 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-09T20:21:49.668 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-09T20:21:49.668 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-09T20:21:49.668 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-09T20:21:49.668 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-09T20:21:49.668 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-09T20:21:49.668 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-09T20:21:49.669 INFO:tasks.workunit.client.0.vm05.stderr:+ 
tee ceph_test_rados_api_lock.log 2026-03-09T20:21:49.669 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_lock: /' 2026-03-09T20:21:49.672 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-09T20:21:49.672 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-09T20:21:49.672 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-09T20:21:49.672 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.672 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-09T20:21:49.672 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.672 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-09T20:21:49.672 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-09T20:21:49.672 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-09T20:21:49.672 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.672 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-09T20:21:49.672 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.672 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-09T20:21:49.672 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-09T20:21:49.672 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-09T20:21:49.672 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.672 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-09T20:21:49.672 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.672 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-09T20:21:49.672 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.672 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-09T20:21:49.672 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-09T20:21:49.672 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-09T20:21:49.672 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.672 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-09T20:21:49.672 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.672 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorzgrep.sh 2026-03-09T20:21:49.672 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-09T20:21:49.672 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.673 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_list --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_list.xml 2026-03-09T20:21:49.673 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_list: /' 2026-03-09T20:21:49.673 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_tier_pp 2026-03-09T20:21:49.673 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_api_list.log 2026-03-09T20:21:49.674 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-09T20:21:49.675 INFO:tasks.workunit.client.0.vm05.stdout:test api_tier_pp on pid 94427 2026-03-09T20:21:49.675 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_tier_pp 2026-03-09T20:21:49.675 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94427 2026-03-09T20:21:49.675 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_tier_pp on pid 94427' 2026-03-09T20:21:49.675 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=94427 2026-03-09T20:21:49.675 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T20:21:49.675 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T20:21:49.675 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-09T20:21:49.676 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-09T20:21:49.676 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-09T20:21:49.676 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.676 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-09T20:21:49.676 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.676 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorgrep.sh 2026-03-09T20:21:49.676 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.680 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_tier_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_tier_pp.xml 2>&1 | tee ceph_test_rados_api_tier_pp.log | sed "s/^/ api_tier_pp: /"' 2026-03-09T20:21:49.680 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-09T20:21:49.681 INFO:tasks.workunit.client.0.vm05.stderr:+ . 
/etc/bashrc 2026-03-09T20:21:49.681 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-09T20:21:49.681 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-09T20:21:49.681 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.681 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-09T20:21:49.681 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_pool 2026-03-09T20:21:49.681 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_pool' 2026-03-09T20:21:49.683 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-09T20:21:49.684 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-09T20:21:49.686 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-09T20:21:49.686 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.686 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-09T20:21:49.686 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.686 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorgrep.sh 2026-03-09T20:21:49.686 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.690 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-09T20:21:49.690 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-09T20:21:49.690 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-09T20:21:49.690 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.690 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-09T20:21:49.690 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.690 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorzgrep.sh 2026-03-09T20:21:49.690 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-09T20:21:49.690 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.690 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-09T20:21:49.690 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-09T20:21:49.691 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-09T20:21:49.691 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.691 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-09T20:21:49.691 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.691 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-09T20:21:49.691 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-09T20:21:49.691 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-09T20:21:49.691 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-09T20:21:49.691 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-09T20:21:49.691 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.691 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-09T20:21:49.691 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.691 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/gawk.sh 2026-03-09T20:21:49.691 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.691 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-09T20:21:49.691 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.691 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/lang.sh 2026-03-09T20:21:49.691 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-09T20:21:49.700 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-09T20:21:49.700 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-09T20:21:49.700 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-09T20:21:49.700 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.700 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-09T20:21:49.700 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.700 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-09T20:21:49.700 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-09T20:21:49.700 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-09T20:21:49.700 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.700 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-09T20:21:49.700 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.700 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-09T20:21:49.700 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-09T20:21:49.700 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-09T20:21:49.700 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.700 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-09T20:21:49.700 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.700 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorxzgrep.sh 2026-03-09T20:21:49.700 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.701 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-09T20:21:49.703 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_pool 2026-03-09T20:21:49.703 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-09T20:21:49.705 INFO:tasks.workunit.client.0.vm05.stdout:test api_pool on pid 94471 2026-03-09T20:21:49.705 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_pool 2026-03-09T20:21:49.705 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94471 2026-03-09T20:21:49.705 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_pool on pid 94471' 2026-03-09T20:21:49.706 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=94471 2026-03-09T20:21:49.706 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T20:21:49.706 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T20:21:49.708 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-09T20:21:49.708 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-09T20:21:49.708 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-09T20:21:49.708 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.708 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-09T20:21:49.708 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.708 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-09T20:21:49.708 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-09T20:21:49.708 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-09T20:21:49.708 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-09T20:21:49.708 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-09T20:21:49.708 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.708 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-09T20:21:49.708 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.708 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-09T20:21:49.708 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.708 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-09T20:21:49.708 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.708 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/lang.sh 2026-03-09T20:21:49.708 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-09T20:21:49.708 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:49.708 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-09T20:21:49.708 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:49.708 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-09T20:21:49.708 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-09T20:21:49.711 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-09T20:21:49.711 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-09T20:21:49.711 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-09T20:21:49.711 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.711 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-09T20:21:49.711 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.711 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-09T20:21:49.711 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-09T20:21:49.711 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-09T20:21:49.711 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.711 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-09T20:21:49.711 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.711 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-09T20:21:49.711 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-09T20:21:49.711 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-09T20:21:49.711 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.711 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-09T20:21:49.711 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.711 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-09T20:21:49.711 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.713 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_pool --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_pool.xml 2>&1 | tee ceph_test_rados_api_pool.log | sed "s/^/ api_pool: /"' 2026-03-09T20:21:49.713 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_snapshots 2026-03-09T20:21:49.714 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_snapshots' 2026-03-09T20:21:49.715 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-09T20:21:49.715 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-09T20:21:49.721 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-09T20:21:49.721 INFO:tasks.workunit.client.0.vm05.stderr:+ . 
/etc/bashrc 2026-03-09T20:21:49.725 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-09T20:21:49.725 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-09T20:21:49.725 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.725 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-09T20:21:49.725 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-09T20:21:49.725 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-09T20:21:49.725 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-09T20:21:49.725 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:49.725 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-09T20:21:49.725 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:49.725 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-09T20:21:49.725 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-09T20:21:49.726 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:49.726 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:49.726 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-09T20:21:49.726 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.726 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-09T20:21:49.726 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.726 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-09T20:21:49.726 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-09T20:21:49.726 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.726 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-09T20:21:49.726 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.726 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/which2.sh 2026-03-09T20:21:49.726 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/94319/exe ']' 2026-03-09T20:21:49.727 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-09T20:21:49.727 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-09T20:21:49.727 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-09T20:21:49.728 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.728 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-09T20:21:49.728 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.728 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorzgrep.sh 2026-03-09T20:21:49.728 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-09T20:21:49.728 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.732 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-09T20:21:49.732 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-09T20:21:49.732 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-09T20:21:49.732 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.732 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-09T20:21:49.732 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.732 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-09T20:21:49.732 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-09T20:21:49.732 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-09T20:21:49.732 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-09T20:21:49.732 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-09T20:21:49.732 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.732 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-09T20:21:49.732 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.732 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-09T20:21:49.732 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.732 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-09T20:21:49.732 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.732 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/lang.sh 2026-03-09T20:21:49.732 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-09T20:21:49.732 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-09T20:21:49.732 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-09T20:21:49.732 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-09T20:21:49.732 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-09T20:21:49.732 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.732 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-09T20:21:49.732 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.732 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorzgrep.sh 2026-03-09T20:21:49.732 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-09T20:21:49.732 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.733 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:49.733 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-09T20:21:49.733 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:49.733 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-09T20:21:49.733 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-09T20:21:49.734 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:49.734 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-09T20:21:49.734 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:49.734 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-09T20:21:49.734 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-09T20:21:49.736 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-09T20:21:49.736 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-09T20:21:49.736 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-09T20:21:49.736 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.736 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-09T20:21:49.736 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.736 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-09T20:21:49.736 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-09T20:21:49.736 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-09T20:21:49.736 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-09T20:21:49.736 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-09T20:21:49.736 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.736 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-09T20:21:49.736 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.736 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-09T20:21:49.736 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.736 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-09T20:21:49.736 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.736 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/lang.sh 2026-03-09T20:21:49.736 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-09T20:21:49.737 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-09T20:21:49.737 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-09T20:21:49.737 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-09T20:21:49.737 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-09T20:21:49.738 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:49.738 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-09T20:21:49.738 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:49.738 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-09T20:21:49.738 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-09T20:21:49.738 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.738 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-09T20:21:49.738 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.738 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorgrep.sh 2026-03-09T20:21:49.738 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.738 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_snapshots 2026-03-09T20:21:49.738 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-09T20:21:49.739 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-09T20:21:49.740 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-09T20:21:49.741 INFO:tasks.workunit.client.0.vm05.stdout:test api_snapshots on pid 94549 2026-03-09T20:21:49.741 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_snapshots 2026-03-09T20:21:49.741 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94549 2026-03-09T20:21:49.741 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_snapshots on pid 94549' 2026-03-09T20:21:49.741 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=94549 2026-03-09T20:21:49.741 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T20:21:49.741 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T20:21:49.741 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-09T20:21:49.741 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-09T20:21:49.741 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-09T20:21:49.741 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:49.741 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-09T20:21:49.741 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 
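[editorial note] The xtrace above is the rados API workunit (invoked as rados/test.sh in this job) starting each ceph_test_rados_api_* gtest binary in the background, recording its PID in a pids array, and piping its output through tee and sed so every line is prefixed with the test name. A minimal bash sketch of that launch pattern, reconstructed only from what the trace shows; the XML report directory and the truncated test list are taken from the log, while the variable names, the wait step, and other details are assumptions rather than the verbatim script:

    #!/usr/bin/env bash
    # Sketch of the parallel launch pattern seen in the xtrace above (not the real qa/workunits/rados/test.sh).
    xml_dir=/home/ubuntu/cephtest/archive/unit_test_xml_report   # path as it appears in the trace
    declare -A pids
    for f in api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify; do  # list truncated
        # Run each gtest binary in the background; tee its output to a per-test log
        # and prefix every line with the test name, as the trace shows.
        bash -o pipefail -exc "ceph_test_rados_$f \
            --gtest_output=xml:$xml_dir/$f.xml 2>&1 \
            | tee ceph_test_rados_$f.log | sed \"s/^/ $f: /\"" &
        pid=$!
        echo "test $f on pid $pid"
        pids[$f]=$pid   # assumed: the script later waits on each recorded pid and collects exit codes
    done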
2026-03-09T20:21:49.741 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-09T20:21:49.741 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-09T20:21:49.741 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:49.741 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:49.741 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-09T20:21:49.741 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.741 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-09T20:21:49.741 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.741 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-09T20:21:49.741 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-09T20:21:49.741 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.741 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-09T20:21:49.741 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.741 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/which2.sh 2026-03-09T20:21:49.741 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/94427/exe ']' 2026-03-09T20:21:49.741 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-09T20:21:49.741 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-09T20:21:49.741 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-09T20:21:49.741 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:49.741 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-09T20:21:49.741 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:49.741 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-09T20:21:49.741 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-09T20:21:49.741 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_snapshots_pp 2026-03-09T20:21:49.741 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:49.741 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:49.741 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-09T20:21:49.741 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.741 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-09T20:21:49.741 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.741 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-09T20:21:49.741 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-09T20:21:49.741 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.741 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-09T20:21:49.742 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.742 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/which2.sh 2026-03-09T20:21:49.742 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/94341/exe ']' 2026-03-09T20:21:49.742 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_snapshots_pp' 2026-03-09T20:21:49.742 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/94341/exe 2026-03-09T20:21:49.742 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/94319/exe 2026-03-09T20:21:49.742 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-09T20:21:49.742 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_snapshots --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_snapshots.xml 2>&1 | tee ceph_test_rados_api_snapshots.log | sed "s/^/ api_snapshots: /"' 2026-03-09T20:21:49.743 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-09T20:21:49.744 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-09T20:21:49.744 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-09T20:21:49.744 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-09T20:21:49.744 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-09T20:21:49.744 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-09T20:21:49.744 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-09T20:21:49.744 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-09T20:21:49.744 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-09T20:21:49.744 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-09T20:21:49.744 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-09T20:21:49.744 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-09T20:21:49.745 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-09T20:21:49.745 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-09T20:21:49.746 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-09T20:21:49.746 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_snapshots_pp 2026-03-09T20:21:49.748 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-09T20:21:49.749 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-09T20:21:49.749 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-09T20:21:49.749 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-09T20:21:49.749 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-09T20:21:49.749 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-09T20:21:49.749 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-09T20:21:49.749 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-09T20:21:49.749 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-09T20:21:49.749 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-09T20:21:49.749 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-09T20:21:49.749 
INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-09T20:21:49.749 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_snapshots_pp 2026-03-09T20:21:49.749 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_misc --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_misc.xml 2026-03-09T20:21:49.749 INFO:tasks.workunit.client.0.vm05.stdout:test api_snapshots_pp on pid 94568 2026-03-09T20:21:49.749 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94568 2026-03-09T20:21:49.749 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_snapshots_pp on pid 94568' 2026-03-09T20:21:49.749 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=94568 2026-03-09T20:21:49.749 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T20:21:49.749 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T20:21:49.749 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-09T20:21:49.749 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-09T20:21:49.749 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-09T20:21:49.749 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.749 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-09T20:21:49.749 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.749 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-09T20:21:49.749 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-09T20:21:49.749 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-09T20:21:49.749 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.749 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-09T20:21:49.749 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.749 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-09T20:21:49.749 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-09T20:21:49.749 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-09T20:21:49.749 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.749 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-09T20:21:49.749 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.749 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-09T20:21:49.749 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.749 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_api_misc.log 2026-03-09T20:21:49.749 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_misc: /' 2026-03-09T20:21:49.750 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-09T20:21:49.750 INFO:tasks.workunit.client.0.vm05.stderr:+ . 
/etc/bashrc 2026-03-09T20:21:49.750 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-09T20:21:49.750 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-09T20:21:49.750 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.750 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-09T20:21:49.751 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/94427/exe 2026-03-09T20:21:49.752 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-09T20:21:49.752 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-09T20:21:49.752 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-09T20:21:49.752 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:49.752 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-09T20:21:49.752 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:49.752 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-09T20:21:49.752 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-09T20:21:49.752 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:49.752 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-09T20:21:49.752 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:49.752 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-09T20:21:49.752 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.752 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-09T20:21:49.752 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.752 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-09T20:21:49.752 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-09T20:21:49.752 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.752 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-09T20:21:49.752 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.752 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/which2.sh 2026-03-09T20:21:49.752 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/94385/exe ']' 2026-03-09T20:21:49.753 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-09T20:21:49.753 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-09T20:21:49.753 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-09T20:21:49.753 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-09T20:21:49.753 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-09T20:21:49.753 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-09T20:21:49.753 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-09T20:21:49.753 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-09T20:21:49.753 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-09T20:21:49.753 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-09T20:21:49.753 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-09T20:21:49.753 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-09T20:21:49.753 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_api_lock_pp.log 2026-03-09T20:21:49.753 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_lock_pp: /' 2026-03-09T20:21:49.754 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_snapshots_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_snapshots_pp.xml 2>&1 | tee ceph_test_rados_api_snapshots_pp.log | sed "s/^/ api_snapshots_pp: /"' 2026-03-09T20:21:49.754 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_stat 2026-03-09T20:21:49.754 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_stat' 2026-03-09T20:21:49.755 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-09T20:21:49.755 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-09T20:21:49.755 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-09T20:21:49.755 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.755 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-09T20:21:49.755 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.755 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorgrep.sh 2026-03-09T20:21:49.755 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.756 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_lock_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_lock_pp.xml 2026-03-09T20:21:49.756 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-09T20:21:49.756 INFO:tasks.workunit.client.0.vm05.stderr:+ . 
/etc/bashrc 2026-03-09T20:21:49.756 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-09T20:21:49.756 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-09T20:21:49.756 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.756 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-09T20:21:49.759 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_tier_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_tier_pp.xml 2026-03-09T20:21:49.759 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_api_tier_pp.log 2026-03-09T20:21:49.760 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_tier_pp: /' 2026-03-09T20:21:49.762 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-09T20:21:49.762 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-09T20:21:49.762 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-09T20:21:49.762 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.762 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-09T20:21:49.762 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.762 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorzgrep.sh 2026-03-09T20:21:49.762 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-09T20:21:49.762 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.763 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-09T20:21:49.765 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_stat 2026-03-09T20:21:49.767 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-09T20:21:49.767 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-09T20:21:49.767 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-09T20:21:49.767 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.767 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-09T20:21:49.767 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.767 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-09T20:21:49.767 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-09T20:21:49.767 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-09T20:21:49.767 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.767 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-09T20:21:49.767 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.767 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-09T20:21:49.767 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-09T20:21:49.767 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-09T20:21:49.767 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.767 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-09T20:21:49.767 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.767 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorxzgrep.sh 2026-03-09T20:21:49.767 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.770 INFO:tasks.workunit.client.0.vm05.stdout:test api_stat on pid 94589 2026-03-09T20:21:49.770 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_stat 2026-03-09T20:21:49.770 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94589 2026-03-09T20:21:49.770 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_stat on pid 94589' 2026-03-09T20:21:49.770 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=94589 2026-03-09T20:21:49.770 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T20:21:49.770 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T20:21:49.770 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-09T20:21:49.771 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-09T20:21:49.771 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-09T20:21:49.771 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.771 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-09T20:21:49.771 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.771 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorgrep.sh 2026-03-09T20:21:49.771 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.771 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/94385/exe 2026-03-09T20:21:49.771 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-09T20:21:49.772 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-09T20:21:49.772 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-09T20:21:49.772 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-09T20:21:49.772 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-09T20:21:49.772 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-09T20:21:49.772 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-09T20:21:49.772 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-09T20:21:49.772 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-09T20:21:49.772 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-09T20:21:49.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:49 vm09 ceph-mon[54524]: pgmap v37: 132 pgs: 3 peering, 129 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:21:49.772 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-09T20:21:49.772 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-09T20:21:49.772 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-09T20:21:49.774 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_stat 
--gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_stat.xml 2>&1 | tee ceph_test_rados_api_stat.log | sed "s/^/ api_stat: /"' 2026-03-09T20:21:49.774 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-09T20:21:49.774 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-09T20:21:49.774 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-09T20:21:49.774 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-09T20:21:49.775 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.775 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-09T20:21:49.775 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_stat_pp 2026-03-09T20:21:49.775 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_stat_pp' 2026-03-09T20:21:49.775 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-09T20:21:49.775 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-09T20:21:49.775 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-09T20:21:49.775 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.775 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-09T20:21:49.775 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.775 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-09T20:21:49.775 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-09T20:21:49.775 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-09T20:21:49.775 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-09T20:21:49.775 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-09T20:21:49.775 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.775 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-09T20:21:49.776 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.776 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-09T20:21:49.776 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.776 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-09T20:21:49.776 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.776 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/lang.sh 2026-03-09T20:21:49.776 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-09T20:21:49.778 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-09T20:21:49.778 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-09T20:21:49.778 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-09T20:21:49.778 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.778 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-09T20:21:49.778 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.778 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorzgrep.sh 2026-03-09T20:21:49.778 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-09T20:21:49.778 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.778 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-09T20:21:49.778 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-09T20:21:49.778 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-09T20:21:49.778 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.778 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-09T20:21:49.778 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.778 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorgrep.sh 2026-03-09T20:21:49.778 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.779 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_misc_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_misc_pp.xml 2026-03-09T20:21:49.782 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_api_misc_pp.log 2026-03-09T20:21:49.782 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_misc_pp: /' 2026-03-09T20:21:49.784 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-09T20:21:49.784 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-09T20:21:49.784 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-09T20:21:49.784 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-09T20:21:49.784 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.784 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-09T20:21:49.784 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.784 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-09T20:21:49.784 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-09T20:21:49.784 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-09T20:21:49.784 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.784 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-09T20:21:49.784 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.784 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-09T20:21:49.784 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-09T20:21:49.784 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-09T20:21:49.784 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.784 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-09T20:21:49.784 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.784 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorxzgrep.sh 2026-03-09T20:21:49.784 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.788 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_stat_pp 2026-03-09T20:21:49.788 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-09T20:21:49.790 INFO:tasks.workunit.client.0.vm05.stdout:test api_stat_pp on pid 94618 2026-03-09T20:21:49.790 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_stat_pp 2026-03-09T20:21:49.790 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94618 2026-03-09T20:21:49.790 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_stat_pp on pid 94618' 2026-03-09T20:21:49.790 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=94618 2026-03-09T20:21:49.790 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T20:21:49.790 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T20:21:49.790 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:49.790 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-09T20:21:49.790 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:49.790 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-09T20:21:49.790 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-09T20:21:49.791 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-09T20:21:49.791 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-09T20:21:49.791 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-09T20:21:49.791 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.791 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-09T20:21:49.791 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.791 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-09T20:21:49.791 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-09T20:21:49.791 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-09T20:21:49.791 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-09T20:21:49.791 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-09T20:21:49.791 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.791 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-09T20:21:49.791 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.791 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-09T20:21:49.791 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.791 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-09T20:21:49.791 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.791 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/lang.sh 2026-03-09T20:21:49.791 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-09T20:21:49.795 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_stat_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_stat_pp.xml 2>&1 | tee ceph_test_rados_api_stat_pp.log | sed "s/^/ api_stat_pp: /"' 2026-03-09T20:21:49.798 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-09T20:21:49.798 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-09T20:21:49.798 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-09T20:21:49.798 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.798 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-09T20:21:49.798 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.798 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-09T20:21:49.798 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-09T20:21:49.798 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-09T20:21:49.798 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.799 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-09T20:21:49.799 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.799 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-09T20:21:49.799 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-09T20:21:49.799 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-09T20:21:49.799 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.799 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-09T20:21:49.799 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.799 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-09T20:21:49.799 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.799 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_watch_notify 2026-03-09T20:21:49.799 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_watch_notify' 2026-03-09T20:21:49.802 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-09T20:21:49.803 INFO:tasks.workunit.client.0.vm05.stderr:+ . 
/etc/bashrc 2026-03-09T20:21:49.803 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-09T20:21:49.803 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-09T20:21:49.803 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.803 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-09T20:21:49.803 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-09T20:21:49.804 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-09T20:21:49.804 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-09T20:21:49.805 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:49.805 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-09T20:21:49.805 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:49.805 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-09T20:21:49.805 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-09T20:21:49.805 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-09T20:21:49.806 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-09T20:21:49.806 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-09T20:21:49.807 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-09T20:21:49.807 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:49.807 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-09T20:21:49.807 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:49.807 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-09T20:21:49.807 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-09T20:21:49.807 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:49.807 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:49.807 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-09T20:21:49.807 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.807 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-09T20:21:49.807 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.807 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-09T20:21:49.807 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-09T20:21:49.807 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.807 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-09T20:21:49.807 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.807 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/which2.sh 2026-03-09T20:21:49.807 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/94549/exe ']' 2026-03-09T20:21:49.807 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/94549/exe 2026-03-09T20:21:49.807 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-09T20:21:49.808 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-09T20:21:49.808 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-09T20:21:49.808 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-09T20:21:49.808 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-09T20:21:49.808 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-09T20:21:49.808 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-09T20:21:49.808 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-09T20:21:49.808 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-09T20:21:49.808 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-09T20:21:49.808 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-09T20:21:49.808 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-09T20:21:49.808 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-09T20:21:49.808 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_snapshots --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_snapshots.xml 2026-03-09T20:21:49.809 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-09T20:21:49.809 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-09T20:21:49.809 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-09T20:21:49.809 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:49.809 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-09T20:21:49.809 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:49.809 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-09T20:21:49.809 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-09T20:21:49.809 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:49.809 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:49.809 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-09T20:21:49.809 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.809 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-09T20:21:49.809 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.809 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-09T20:21:49.809 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-09T20:21:49.809 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.809 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-09T20:21:49.809 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.809 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/which2.sh 2026-03-09T20:21:49.809 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/94471/exe ']' 2026-03-09T20:21:49.809 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-09T20:21:49.809 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-09T20:21:49.809 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.809 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-09T20:21:49.809 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.809 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorgrep.sh 2026-03-09T20:21:49.809 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.816 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_api_snapshots.log 2026-03-09T20:21:49.816 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-09T20:21:49.816 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-09T20:21:49.816 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-09T20:21:49.816 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.816 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-09T20:21:49.816 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.816 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorzgrep.sh 2026-03-09T20:21:49.816 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-09T20:21:49.816 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.817 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_snapshots: /' 2026-03-09T20:21:49.823 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-09T20:21:49.823 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-09T20:21:49.823 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-09T20:21:49.823 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.823 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-09T20:21:49.823 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.823 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorzgrep.sh 2026-03-09T20:21:49.823 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-09T20:21:49.823 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.823 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-09T20:21:49.825 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_watch_notify 2026-03-09T20:21:49.827 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/94471/exe 2026-03-09T20:21:49.827 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-09T20:21:49.829 INFO:tasks.workunit.client.0.vm05.stdout:test api_watch_notify on pid 94695 2026-03-09T20:21:49.829 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-09T20:21:49.829 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-09T20:21:49.829 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-09T20:21:49.829 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-09T20:21:49.829 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-09T20:21:49.829 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-09T20:21:49.829 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-09T20:21:49.829 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-09T20:21:49.829 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-09T20:21:49.829 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-09T20:21:49.829 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-09T20:21:49.829 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-09T20:21:49.829 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_watch_notify 2026-03-09T20:21:49.829 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94695 2026-03-09T20:21:49.829 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_watch_notify on pid 94695' 2026-03-09T20:21:49.829 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=94695 2026-03-09T20:21:49.829 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T20:21:49.829 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T20:21:49.832 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-09T20:21:49.832 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-09T20:21:49.832 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-09T20:21:49.832 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.832 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-09T20:21:49.832 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.832 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-09T20:21:49.832 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' 
-t 0 ']' 2026-03-09T20:21:49.832 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-09T20:21:49.832 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.832 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-09T20:21:49.832 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.832 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-09T20:21:49.832 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-09T20:21:49.832 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-09T20:21:49.832 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.832 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-09T20:21:49.832 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.832 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-09T20:21:49.832 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.833 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_pool --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_pool.xml 2026-03-09T20:21:49.834 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-09T20:21:49.834 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-09T20:21:49.834 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-09T20:21:49.834 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.834 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-09T20:21:49.834 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.834 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-09T20:21:49.834 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-09T20:21:49.834 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-09T20:21:49.834 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-09T20:21:49.834 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-09T20:21:49.834 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.834 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-09T20:21:49.834 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.834 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-09T20:21:49.834 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.834 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-09T20:21:49.834 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.834 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/lang.sh 2026-03-09T20:21:49.834 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-09T20:21:49.835 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_api_pool.log 2026-03-09T20:21:49.835 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_pool: /' 2026-03-09T20:21:49.836 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_watch_notify --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_watch_notify.xml 2>&1 | tee ceph_test_rados_api_watch_notify.log | sed "s/^/ api_watch_notify: /"' 2026-03-09T20:21:49.836 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-09T20:21:49.837 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-09T20:21:49.837 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-09T20:21:49.837 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-09T20:21:49.837 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.837 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-09T20:21:49.837 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_watch_notify_pp 2026-03-09T20:21:49.838 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_watch_notify_pp' 2026-03-09T20:21:49.839 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-09T20:21:49.839 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-09T20:21:49.839 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-09T20:21:49.839 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-09T20:21:49.839 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.839 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-09T20:21:49.839 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.839 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-09T20:21:49.839 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-09T20:21:49.839 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-09T20:21:49.839 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-09T20:21:49.839 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-09T20:21:49.839 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.839 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-09T20:21:49.839 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.839 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-09T20:21:49.839 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.839 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-09T20:21:49.839 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.839 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/lang.sh 2026-03-09T20:21:49.839 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-09T20:21:49.839 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-09T20:21:49.840 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-09T20:21:49.840 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.840 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-09T20:21:49.840 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.840 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorgrep.sh 2026-03-09T20:21:49.840 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.841 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-09T20:21:49.842 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-09T20:21:49.842 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-09T20:21:49.842 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-09T20:21:49.842 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.842 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-09T20:21:49.842 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.842 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorzgrep.sh 2026-03-09T20:21:49.842 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-09T20:21:49.842 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.842 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-09T20:21:49.844 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-09T20:21:49.844 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:49.844 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-09T20:21:49.845 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:49.845 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-09T20:21:49.845 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-09T20:21:49.846 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_watch_notify_pp 2026-03-09T20:21:49.848 INFO:tasks.workunit.client.0.vm05.stdout:test api_watch_notify_pp on pid 94733 2026-03-09T20:21:49.848 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_watch_notify_pp 2026-03-09T20:21:49.848 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94733 2026-03-09T20:21:49.848 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_watch_notify_pp on pid 94733' 2026-03-09T20:21:49.848 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=94733 2026-03-09T20:21:49.848 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T20:21:49.848 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T20:21:49.848 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 
2026-03-09T20:21:49.848 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-09T20:21:49.848 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:49.848 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-09T20:21:49.848 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-09T20:21:49.848 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-09T20:21:49.850 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-09T20:21:49.850 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-09T20:21:49.850 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-09T20:21:49.850 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.850 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-09T20:21:49.850 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.850 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-09T20:21:49.850 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-09T20:21:49.850 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-09T20:21:49.850 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-09T20:21:49.850 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-09T20:21:49.850 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-09T20:21:49.850 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.850 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-09T20:21:49.850 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.850 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-09T20:21:49.850 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.850 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-09T20:21:49.850 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.851 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/lang.sh 2026-03-09T20:21:49.851 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-09T20:21:49.851 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-09T20:21:49.851 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-09T20:21:49.851 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-09T20:21:49.851 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.851 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-09T20:21:49.851 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.851 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-09T20:21:49.851 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' 
-t 0 ']' 2026-03-09T20:21:49.851 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-09T20:21:49.851 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.851 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-09T20:21:49.851 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.851 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-09T20:21:49.851 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-09T20:21:49.851 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-09T20:21:49.851 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.851 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-09T20:21:49.851 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.851 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-09T20:21:49.851 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.854 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-09T20:21:49.854 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-09T20:21:49.854 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-09T20:21:49.854 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:49.854 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-09T20:21:49.854 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:49.854 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-09T20:21:49.854 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-09T20:21:49.854 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:49.854 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:49.854 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-09T20:21:49.854 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.854 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-09T20:21:49.854 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.854 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-09T20:21:49.854 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-09T20:21:49.854 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.854 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-09T20:21:49.854 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.854 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/which2.sh 2026-03-09T20:21:49.854 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/94568/exe ']' 2026-03-09T20:21:49.854 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-09T20:21:49.856 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-09T20:21:49.856 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-09T20:21:49.856 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-09T20:21:49.856 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:49.856 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-09T20:21:49.856 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:49.856 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-09T20:21:49.856 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-09T20:21:49.856 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:49.856 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:49.856 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-09T20:21:49.856 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.856 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-09T20:21:49.856 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.856 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-09T20:21:49.856 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-09T20:21:49.856 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.856 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-09T20:21:49.856 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.856 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/which2.sh 2026-03-09T20:21:49.856 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/94589/exe ']' 2026-03-09T20:21:49.859 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:49.859 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-09T20:21:49.859 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:49.859 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-09T20:21:49.859 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-09T20:21:49.861 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/94568/exe 2026-03-09T20:21:49.861 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-09T20:21:49.861 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-09T20:21:49.861 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-09T20:21:49.861 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-09T20:21:49.861 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-09T20:21:49.861 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-09T20:21:49.862 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-09T20:21:49.862 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-09T20:21:49.862 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-09T20:21:49.862 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-09T20:21:49.862 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-09T20:21:49.862 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-09T20:21:49.862 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-09T20:21:49.862 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-09T20:21:49.862 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_cmd 2026-03-09T20:21:49.862 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_watch_notify_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_watch_notify_pp.xml 2>&1 | tee ceph_test_rados_api_watch_notify_pp.log | sed "s/^/ api_watch_notify_pp: /"' 2026-03-09T20:21:49.862 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_cmd' 2026-03-09T20:21:49.863 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-09T20:21:49.863 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-09T20:21:49.863 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-09T20:21:49.863 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:49.864 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-09T20:21:49.864 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:49.864 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-09T20:21:49.864 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-09T20:21:49.864 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:49.864 
INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:49.864 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-09T20:21:49.864 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.864 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-09T20:21:49.864 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.864 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-09T20:21:49.864 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-09T20:21:49.864 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-09T20:21:49.864 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.864 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-09T20:21:49.864 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-09T20:21:49.864 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.864 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/which2.sh 2026-03-09T20:21:49.864 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/94618/exe ']' 2026-03-09T20:21:49.864 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-09T20:21:49.864 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-09T20:21:49.864 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.864 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-09T20:21:49.864 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_snapshots_pp: /' 2026-03-09T20:21:49.864 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_api_snapshots_pp.log 2026-03-09T20:21:49.865 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_snapshots_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_snapshots_pp.xml 2026-03-09T20:21:49.867 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-09T20:21:49.867 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/94589/exe 2026-03-09T20:21:49.867 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-09T20:21:49.867 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-09T20:21:49.867 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.867 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-09T20:21:49.867 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.867 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorgrep.sh 2026-03-09T20:21:49.867 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.867 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-09T20:21:49.868 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-09T20:21:49.868 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-09T20:21:49.868 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-09T20:21:49.868 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-09T20:21:49.868 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-09T20:21:49.868 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-09T20:21:49.869 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-09T20:21:49.869 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-09T20:21:49.869 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-09T20:21:49.869 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-09T20:21:49.869 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-09T20:21:49.869 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-09T20:21:49.869 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-09T20:21:49.869 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-09T20:21:49.869 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-09T20:21:49.869 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.869 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-09T20:21:49.870 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.870 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorzgrep.sh 2026-03-09T20:21:49.870 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-09T20:21:49.870 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.871 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_cmd 2026-03-09T20:21:49.871 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-09T20:21:49.871 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/94618/exe 2026-03-09T20:21:49.872 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_api_stat.log 2026-03-09T20:21:49.872 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_stat: /' 2026-03-09T20:21:49.873 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_stat --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_stat.xml 2026-03-09T20:21:49.873 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-09T20:21:49.874 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-09T20:21:49.874 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-09T20:21:49.874 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-09T20:21:49.874 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-09T20:21:49.874 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-09T20:21:49.874 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-09T20:21:49.874 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-09T20:21:49.874 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-09T20:21:49.874 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-09T20:21:49.874 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-09T20:21:49.874 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-09T20:21:49.874 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-09T20:21:49.877 INFO:tasks.workunit.client.0.vm05.stdout:test api_cmd on pid 94779 2026-03-09T20:21:49.878 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_cmd 2026-03-09T20:21:49.878 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94779 2026-03-09T20:21:49.878 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_cmd on pid 94779' 2026-03-09T20:21:49.878 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=94779 2026-03-09T20:21:49.878 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T20:21:49.878 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T20:21:49.878 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-09T20:21:49.878 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-09T20:21:49.878 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-09T20:21:49.878 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.878 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r 
/etc/profile.d/colorls.sh ']' 2026-03-09T20:21:49.878 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.878 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-09T20:21:49.878 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-09T20:21:49.878 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-09T20:21:49.878 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.878 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-09T20:21:49.878 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.878 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-09T20:21:49.878 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-09T20:21:49.878 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-09T20:21:49.878 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.878 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-09T20:21:49.878 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.878 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-09T20:21:49.878 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.880 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_cmd_pp 2026-03-09T20:21:49.880 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_cmd_pp' 2026-03-09T20:21:49.880 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_stat_pp: /' 2026-03-09T20:21:49.881 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_api_stat_pp.log 2026-03-09T20:21:49.881 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_stat_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_stat_pp.xml 2026-03-09T20:21:49.883 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_cmd --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_cmd.xml 2>&1 | tee ceph_test_rados_api_cmd.log | sed "s/^/ api_cmd: /"' 2026-03-09T20:21:49.884 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-09T20:21:49.884 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-09T20:21:49.884 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-09T20:21:49.884 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.884 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-09T20:21:49.884 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.884 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/debuginfod.sh 2026-03-09T20:21:49.884 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-09T20:21:49.884 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-09T20:21:49.884 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-09T20:21:49.884 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-09T20:21:49.884 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.884 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-09T20:21:49.884 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.884 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-09T20:21:49.884 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.884 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-09T20:21:49.884 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.884 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/lang.sh 2026-03-09T20:21:49.884 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-09T20:21:49.885 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-09T20:21:49.885 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-09T20:21:49.885 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-09T20:21:49.886 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-09T20:21:49.886 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-09T20:21:49.886 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-09T20:21:49.886 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.886 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-09T20:21:49.886 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.886 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorzgrep.sh 2026-03-09T20:21:49.886 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-09T20:21:49.886 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.886 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-09T20:21:49.886 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.886 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-09T20:21:49.887 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-09T20:21:49.890 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-09T20:21:49.891 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-09T20:21:49.891 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-09T20:21:49.891 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.891 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-09T20:21:49.891 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.891 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorgrep.sh 2026-03-09T20:21:49.891 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.892 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_cmd_pp 2026-03-09T20:21:49.892 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-09T20:21:49.894 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-09T20:21:49.894 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-09T20:21:49.894 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-09T20:21:49.894 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.894 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-09T20:21:49.894 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.894 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-09T20:21:49.894 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-09T20:21:49.894 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-09T20:21:49.894 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-09T20:21:49.894 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-09T20:21:49.894 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.894 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-09T20:21:49.894 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.894 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-09T20:21:49.894 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.894 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-09T20:21:49.894 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.894 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/lang.sh 2026-03-09T20:21:49.894 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-09T20:21:49.896 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-09T20:21:49.896 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:49.896 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-09T20:21:49.896 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:49.896 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-09T20:21:49.896 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-09T20:21:49.897 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-09T20:21:49.898 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-09T20:21:49.898 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-09T20:21:49.898 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-09T20:21:49.898 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:49.898 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-09T20:21:49.898 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:49.898 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-09T20:21:49.898 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-09T20:21:49.898 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:49.898 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:49.898 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-09T20:21:49.898 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.898 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-09T20:21:49.898 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.898 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-09T20:21:49.898 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-09T20:21:49.898 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.898 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-09T20:21:49.898 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.898 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/which2.sh 2026-03-09T20:21:49.898 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/94733/exe ']' 2026-03-09T20:21:49.898 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/94733/exe 2026-03-09T20:21:49.899 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-09T20:21:49.899 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-09T20:21:49.899 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-09T20:21:49.899 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-09T20:21:49.900 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-09T20:21:49.900 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-09T20:21:49.900 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-09T20:21:49.900 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-09T20:21:49.900 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-09T20:21:49.900 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-09T20:21:49.900 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-09T20:21:49.900 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-09T20:21:49.900 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-09T20:21:49.900 INFO:tasks.workunit.client.0.vm05.stdout:test api_cmd_pp on pid 94825 2026-03-09T20:21:49.900 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_cmd_pp 2026-03-09T20:21:49.900 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94825 2026-03-09T20:21:49.900 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_cmd_pp on pid 94825' 2026-03-09T20:21:49.900 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_watch_notify_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_watch_notify_pp.xml 2026-03-09T20:21:49.900 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=94825 2026-03-09T20:21:49.900 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T20:21:49.900 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T20:21:49.900 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:49.900 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-09T20:21:49.900 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:49.900 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-09T20:21:49.900 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-09T20:21:49.902 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_cmd_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_cmd_pp.xml 2>&1 | tee ceph_test_rados_api_cmd_pp.log | sed "s/^/ api_cmd_pp: /"' 2026-03-09T20:21:49.903 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-09T20:21:49.903 INFO:tasks.workunit.client.0.vm05.stderr:+ . 
/etc/bashrc 2026-03-09T20:21:49.903 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-09T20:21:49.903 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-09T20:21:49.903 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.903 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-09T20:21:49.910 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_service 2026-03-09T20:21:49.910 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_service' 2026-03-09T20:21:49.910 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-09T20:21:49.912 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-09T20:21:49.912 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-09T20:21:49.912 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-09T20:21:49.912 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:49.912 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-09T20:21:49.912 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:49.912 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-09T20:21:49.912 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-09T20:21:49.912 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:49.912 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:49.912 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-09T20:21:49.912 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.912 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-09T20:21:49.912 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.912 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-09T20:21:49.912 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-09T20:21:49.912 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.912 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-09T20:21:49.912 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.912 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/which2.sh 2026-03-09T20:21:49.912 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/94695/exe ']' 2026-03-09T20:21:49.917 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_api_watch_notify_pp.log 2026-03-09T20:21:49.919 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_watch_notify_pp: /' 2026-03-09T20:21:49.919 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-09T20:21:49.919 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-09T20:21:49.919 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-09T20:21:49.919 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.919 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-09T20:21:49.919 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.919 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorgrep.sh 2026-03-09T20:21:49.919 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.921 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/94695/exe 2026-03-09T20:21:49.922 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-09T20:21:49.922 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-09T20:21:49.922 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-09T20:21:49.922 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.922 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_service 2026-03-09T20:21:49.923 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-09T20:21:49.923 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.923 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-09T20:21:49.923 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-09T20:21:49.923 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-09T20:21:49.923 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.923 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-09T20:21:49.923 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.923 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-09T20:21:49.923 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-09T20:21:49.923 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-09T20:21:49.923 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.923 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-09T20:21:49.923 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.923 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorxzgrep.sh 2026-03-09T20:21:49.923 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.923 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-09T20:21:49.923 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-09T20:21:49.924 INFO:tasks.workunit.client.0.vm05.stdout:test api_service on pid 94852 2026-03-09T20:21:49.924 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_service 2026-03-09T20:21:49.924 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94852 2026-03-09T20:21:49.924 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_service on pid 94852' 2026-03-09T20:21:49.924 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=94852 2026-03-09T20:21:49.924 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T20:21:49.924 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T20:21:49.925 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-09T20:21:49.925 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-09T20:21:49.925 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-09T20:21:49.925 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-09T20:21:49.925 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-09T20:21:49.925 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-09T20:21:49.925 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-09T20:21:49.925 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-09T20:21:49.925 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-09T20:21:49.925 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_service --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_service.xml 2>&1 | tee ceph_test_rados_api_service.log | sed "s/^/ api_service: /"' 2026-03-09T20:21:49.925 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-09T20:21:49.925 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-09T20:21:49.925 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-09T20:21:49.925 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-09T20:21:49.925 INFO:tasks.workunit.client.0.vm05.stderr:+ . 
/etc/bashrc 2026-03-09T20:21:49.925 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-09T20:21:49.925 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-09T20:21:49.925 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.925 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-09T20:21:49.926 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-09T20:21:49.926 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-09T20:21:49.926 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-09T20:21:49.926 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.926 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-09T20:21:49.926 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-09T20:21:49.926 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.926 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorzgrep.sh 2026-03-09T20:21:49.926 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-09T20:21:49.926 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.926 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-09T20:21:49.926 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-09T20:21:49.926 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.927 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-09T20:21:49.927 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.927 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorgrep.sh 2026-03-09T20:21:49.927 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.929 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-09T20:21:49.929 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-09T20:21:49.929 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-09T20:21:49.929 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.929 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-09T20:21:49.929 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.929 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-09T20:21:49.929 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-09T20:21:49.929 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-09T20:21:49.929 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-09T20:21:49.929 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-09T20:21:49.929 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.929 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-09T20:21:49.929 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.929 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/gawk.sh 2026-03-09T20:21:49.929 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.929 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-09T20:21:49.929 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.929 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/lang.sh 2026-03-09T20:21:49.929 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-09T20:21:49.929 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_service_pp 2026-03-09T20:21:49.929 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_service_pp' 2026-03-09T20:21:49.929 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_watch_notify --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_watch_notify.xml 2026-03-09T20:21:49.930 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-09T20:21:49.930 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-09T20:21:49.930 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-09T20:21:49.930 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.930 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-09T20:21:49.930 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.930 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-09T20:21:49.930 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-09T20:21:49.930 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-09T20:21:49.930 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.930 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-09T20:21:49.930 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.930 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-09T20:21:49.930 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-09T20:21:49.930 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-09T20:21:49.930 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.930 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-09T20:21:49.930 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.930 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorxzgrep.sh 2026-03-09T20:21:49.930 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.930 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-09T20:21:49.931 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:49.931 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-09T20:21:49.931 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:49.931 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-09T20:21:49.931 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-09T20:21:49.932 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-09T20:21:49.932 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-09T20:21:49.932 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-09T20:21:49.932 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.932 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-09T20:21:49.932 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.932 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorzgrep.sh 2026-03-09T20:21:49.932 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-09T20:21:49.932 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.932 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_api_watch_notify.log 2026-03-09T20:21:49.932 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-09T20:21:49.932 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_watch_notify: /' 2026-03-09T20:21:49.934 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-09T20:21:49.934 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-09T20:21:49.934 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-09T20:21:49.934 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:49.934 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-09T20:21:49.934 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:49.934 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-09T20:21:49.934 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-09T20:21:49.934 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:49.934 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:49.934 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-09T20:21:49.934 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.934 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-09T20:21:49.934 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.934 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/less.sh 2026-03-09T20:21:49.934 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-09T20:21:49.934 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.934 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-09T20:21:49.934 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.934 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/which2.sh 2026-03-09T20:21:49.934 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/94779/exe ']' 2026-03-09T20:21:49.935 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-09T20:21:49.935 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-09T20:21:49.935 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-09T20:21:49.935 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.935 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-09T20:21:49.935 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.935 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-09T20:21:49.935 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-09T20:21:49.936 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-09T20:21:49.936 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-09T20:21:49.936 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-09T20:21:49.936 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.936 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-09T20:21:49.936 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.936 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-09T20:21:49.936 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.936 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-09T20:21:49.936 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.936 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/lang.sh 2026-03-09T20:21:49.936 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-09T20:21:49.936 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/94779/exe 2026-03-09T20:21:49.936 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-09T20:21:49.937 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-09T20:21:49.937 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-09T20:21:49.937 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-09T20:21:49.937 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-09T20:21:49.937 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-09T20:21:49.937 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-09T20:21:49.937 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-09T20:21:49.937 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-09T20:21:49.937 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-09T20:21:49.937 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-09T20:21:49.937 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-09T20:21:49.937 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-09T20:21:49.937 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-09T20:21:49.938 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:49.938 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-09T20:21:49.938 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:49.938 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-09T20:21:49.938 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-09T20:21:49.938 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_cmd --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_cmd.xml 2026-03-09T20:21:49.939 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-09T20:21:49.939 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-09T20:21:49.939 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-09T20:21:49.939 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.939 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-09T20:21:49.939 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.939 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-09T20:21:49.939 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-09T20:21:49.939 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-09T20:21:49.939 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.939 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-09T20:21:49.939 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.939 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorsysstat.sh 2026-03-09T20:21:49.939 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-09T20:21:49.939 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-09T20:21:49.939 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.939 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-09T20:21:49.939 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.939 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-09T20:21:49.939 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.939 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-09T20:21:49.939 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_service_pp 2026-03-09T20:21:49.941 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_api_cmd.log 2026-03-09T20:21:49.941 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_cmd: /' 2026-03-09T20:21:49.942 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-09T20:21:49.943 INFO:tasks.workunit.client.0.vm05.stdout:test api_service_pp on pid 94889 2026-03-09T20:21:49.943 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_service_pp 2026-03-09T20:21:49.943 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94889 2026-03-09T20:21:49.943 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_service_pp on pid 94889' 2026-03-09T20:21:49.943 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=94889 2026-03-09T20:21:49.943 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T20:21:49.943 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T20:21:49.946 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-09T20:21:49.946 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-09T20:21:49.946 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-09T20:21:49.947 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:49.947 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-09T20:21:49.947 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:49.947 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-09T20:21:49.947 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-09T20:21:49.947 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:49.947 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:49.947 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-09T20:21:49.947 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.947 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-09T20:21:49.947 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.947 
INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-09T20:21:49.947 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-09T20:21:49.947 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.947 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-09T20:21:49.947 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.947 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/which2.sh 2026-03-09T20:21:49.947 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/94852/exe ']' 2026-03-09T20:21:49.949 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_c_write_operations 2026-03-09T20:21:49.949 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_c_write_operations' 2026-03-09T20:21:49.953 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_service_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_service_pp.xml 2>&1 | tee ceph_test_rados_api_service_pp.log | sed "s/^/ api_service_pp: /"' 2026-03-09T20:21:49.955 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-09T20:21:49.955 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-09T20:21:49.955 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-09T20:21:49.955 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.955 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-09T20:21:49.955 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.955 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorzgrep.sh 2026-03-09T20:21:49.955 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-09T20:21:49.955 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.958 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-09T20:21:49.958 INFO:tasks.workunit.client.0.vm05.stderr:+ . 
/etc/bashrc 2026-03-09T20:21:49.958 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-09T20:21:49.958 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-09T20:21:49.958 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.958 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-09T20:21:49.959 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/94852/exe 2026-03-09T20:21:49.960 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-09T20:21:49.960 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-09T20:21:49.960 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-09T20:21:49.960 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-09T20:21:49.961 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-09T20:21:49.961 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-09T20:21:49.961 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-09T20:21:49.961 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-09T20:21:49.961 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-09T20:21:49.961 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-09T20:21:49.961 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-09T20:21:49.961 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-09T20:21:49.961 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-09T20:21:49.964 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_c_write_operations 2026-03-09T20:21:49.964 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-09T20:21:49.964 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-09T20:21:49.964 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-09T20:21:49.964 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-09T20:21:49.964 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.964 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-09T20:21:49.964 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.964 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorgrep.sh 2026-03-09T20:21:49.964 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.964 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_service --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_service.xml 2026-03-09T20:21:49.965 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_api_service.log 2026-03-09T20:21:49.966 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_service: /' 2026-03-09T20:21:49.971 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-09T20:21:49.971 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-09T20:21:49.971 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-09T20:21:49.972 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.972 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-09T20:21:49.972 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.972 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-09T20:21:49.972 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-09T20:21:49.972 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-09T20:21:49.972 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-09T20:21:49.972 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-09T20:21:49.972 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.972 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-09T20:21:49.972 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.972 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-09T20:21:49.972 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.972 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-09T20:21:49.972 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.972 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/lang.sh 2026-03-09T20:21:49.972 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-09T20:21:49.972 INFO:tasks.workunit.client.0.vm05.stdout:test api_c_write_operations on pid 94940 2026-03-09T20:21:49.972 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_c_write_operations 2026-03-09T20:21:49.972 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94940 2026-03-09T20:21:49.972 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_c_write_operations on pid 94940' 2026-03-09T20:21:49.972 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=94940 2026-03-09T20:21:49.972 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T20:21:49.972 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T20:21:49.974 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_c_write_operations --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_c_write_operations.xml 2>&1 | tee ceph_test_rados_api_c_write_operations.log | sed "s/^/ api_c_write_operations: /"' 2026-03-09T20:21:49.975 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_c_read_operations 2026-03-09T20:21:49.975 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-09T20:21:49.975 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_c_read_operations' 2026-03-09T20:21:49.982 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-09T20:21:49.982 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-09T20:21:49.982 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-09T20:21:49.982 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.982 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-09T20:21:49.982 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-09T20:21:49.983 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-09T20:21:49.983 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-09T20:21:49.983 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-09T20:21:49.983 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.983 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-09T20:21:49.983 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.983 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-09T20:21:49.983 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-09T20:21:49.983 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-09T20:21:49.983 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.983 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-09T20:21:49.983 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.983 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorsysstat.sh 2026-03-09T20:21:49.983 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-09T20:21:49.983 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-09T20:21:49.983 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-09T20:21:49.983 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.983 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-09T20:21:49.983 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.983 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-09T20:21:49.983 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.983 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-09T20:21:49.983 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-09T20:21:49.983 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.983 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-09T20:21:49.984 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.984 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorgrep.sh 2026-03-09T20:21:49.984 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.988 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_c_read_operations 2026-03-09T20:21:49.988 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-09T20:21:49.990 INFO:tasks.workunit.client.0.vm05.stdout:test api_c_read_operations on pid 94964 2026-03-09T20:21:49.990 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_c_read_operations 2026-03-09T20:21:49.990 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94964 2026-03-09T20:21:49.990 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_c_read_operations on pid 94964' 2026-03-09T20:21:49.990 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=94964 2026-03-09T20:21:49.990 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T20:21:49.990 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T20:21:49.991 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s list_parallel 2026-03-09T20:21:49.991 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:49.991 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-09T20:21:49.991 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:49.991 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-09T20:21:49.991 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-09T20:21:49.991 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' list_parallel' 2026-03-09T20:21:49.993 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_c_read_operations --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_c_read_operations.xml 2>&1 | tee ceph_test_rados_api_c_read_operations.log | sed "s/^/ api_c_read_operations: /"' 
2026-03-09T20:21:49.995 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-09T20:21:49.995 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-09T20:21:49.995 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-09T20:21:49.995 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-09T20:21:49.995 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.995 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-09T20:21:49.995 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-09T20:21:49.995 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-09T20:21:49.995 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-09T20:21:49.995 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.995 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-09T20:21:49.995 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.995 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorgrep.sh 2026-03-09T20:21:49.995 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:49.996 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-09T20:21:49.996 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-09T20:21:49.996 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-09T20:21:49.996 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-09T20:21:49.996 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.996 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-09T20:21:49.996 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.996 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-09T20:21:49.996 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-09T20:21:49.996 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-09T20:21:49.996 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.996 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-09T20:21:49.996 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.996 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-09T20:21:49.996 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-09T20:21:49.996 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-09T20:21:49.996 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:49.996 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-09T20:21:49.996 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:49.996 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorxzgrep.sh 2026-03-09T20:21:49.996 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.000 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-09T20:21:50.000 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-09T20:21:50.001 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-09T20:21:50.001 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:50.001 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-09T20:21:50.001 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:50.001 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-09T20:21:50.001 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-09T20:21:50.001 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:50.001 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:50.001 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-09T20:21:50.001 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.001 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-09T20:21:50.001 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.001 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-09T20:21:50.001 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-09T20:21:50.001 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.001 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-09T20:21:50.001 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.001 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/which2.sh 2026-03-09T20:21:50.001 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/94825/exe ']' 2026-03-09T20:21:50.003 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-09T20:21:50.003 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-09T20:21:50.003 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-09T20:21:50.003 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.003 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-09T20:21:50.003 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.003 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorzgrep.sh 2026-03-09T20:21:50.003 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-09T20:21:50.003 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.004 INFO:tasks.workunit.client.0.vm05.stderr:++ echo list_parallel 2026-03-09T20:21:50.005 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-09T20:21:50.008 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=list_parallel 2026-03-09T20:21:50.008 INFO:tasks.workunit.client.0.vm05.stdout:test list_parallel on pid 94996 2026-03-09T20:21:50.008 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94996 2026-03-09T20:21:50.008 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test list_parallel on pid 94996' 2026-03-09T20:21:50.008 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=94996 2026-03-09T20:21:50.008 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T20:21:50.008 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T20:21:50.009 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/94825/exe 2026-03-09T20:21:50.010 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-09T20:21:50.010 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-09T20:21:50.010 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-09T20:21:50.010 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-09T20:21:50.010 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-09T20:21:50.010 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-09T20:21:50.010 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-09T20:21:50.010 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-09T20:21:50.010 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-09T20:21:50.011 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-09T20:21:50.011 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-09T20:21:50.011 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-09T20:21:50.011 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-09T20:21:50.012 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s open_pools_parallel 2026-03-09T20:21:50.012 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_list_parallel --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/list_parallel.xml 2>&1 | tee ceph_test_rados_list_parallel.log | sed "s/^/ list_parallel: /"' 2026-03-09T20:21:50.012 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' open_pools_parallel' 2026-03-09T20:21:50.015 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-09T20:21:50.017 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-09T20:21:50.017 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-09T20:21:50.017 INFO:tasks.workunit.client.0.vm05.stderr:+++ 
alias 'fgrep=fgrep --color=auto' 2026-03-09T20:21:50.017 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.017 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-09T20:21:50.017 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.017 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-09T20:21:50.017 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-09T20:21:50.017 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-09T20:21:50.017 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.017 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-09T20:21:50.017 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.017 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-09T20:21:50.017 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-09T20:21:50.017 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-09T20:21:50.017 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.017 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-09T20:21:50.017 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.017 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-09T20:21:50.017 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.017 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-09T20:21:50.019 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_cmd_pp: /' 2026-03-09T20:21:50.019 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-09T20:21:50.019 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-09T20:21:50.019 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.019 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-09T20:21:50.020 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-09T20:21:50.020 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-09T20:21:50.020 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-09T20:21:50.020 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.020 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-09T20:21:50.020 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.020 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-09T20:21:50.020 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-09T20:21:50.020 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-09T20:21:50.020 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-09T20:21:50.020 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-09T20:21:50.020 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.020 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-09T20:21:50.020 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.020 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/gawk.sh 2026-03-09T20:21:50.020 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.020 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-09T20:21:50.022 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.022 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/lang.sh 2026-03-09T20:21:50.022 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-09T20:21:50.022 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-09T20:21:50.022 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_api_cmd_pp.log 2026-03-09T20:21:50.022 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-09T20:21:50.022 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-09T20:21:50.022 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.022 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-09T20:21:50.022 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.022 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorgrep.sh 2026-03-09T20:21:50.022 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.023 INFO:tasks.workunit.client.0.vm05.stderr:++ echo open_pools_parallel 2026-03-09T20:21:50.023 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_cmd_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_cmd_pp.xml 2026-03-09T20:21:50.023 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-09T20:21:50.026 INFO:tasks.workunit.client.0.vm05.stdout:test open_pools_parallel on pid 95025 2026-03-09T20:21:50.026 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=open_pools_parallel 2026-03-09T20:21:50.026 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=95025 2026-03-09T20:21:50.026 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test open_pools_parallel on pid 95025' 2026-03-09T20:21:50.026 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=95025 2026-03-09T20:21:50.026 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-09T20:21:50.026 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T20:21:50.026 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-09T20:21:50.026 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-09T20:21:50.026 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-09T20:21:50.026 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.026 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-09T20:21:50.026 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.026 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-09T20:21:50.026 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' 
-t 0 ']' 2026-03-09T20:21:50.026 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-09T20:21:50.026 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.026 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-09T20:21:50.026 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.026 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-09T20:21:50.026 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-09T20:21:50.027 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-09T20:21:50.027 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.027 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-09T20:21:50.027 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.027 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-09T20:21:50.027 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.027 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s delete_pools_parallel 2026-03-09T20:21:50.027 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' delete_pools_parallel' 2026-03-09T20:21:50.027 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_open_pools_parallel --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/open_pools_parallel.xml 2>&1 | tee ceph_test_rados_open_pools_parallel.log | sed "s/^/ open_pools_parallel: /"' 2026-03-09T20:21:50.027 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-09T20:21:50.028 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-09T20:21:50.028 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-09T20:21:50.028 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-09T20:21:50.028 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-09T20:21:50.028 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.028 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-09T20:21:50.029 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-09T20:21:50.030 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-09T20:21:50.030 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-09T20:21:50.030 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.030 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-09T20:21:50.030 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.030 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorgrep.sh 2026-03-09T20:21:50.030 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.030 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-09T20:21:50.032 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-09T20:21:50.032 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-09T20:21:50.032 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-09T20:21:50.032 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.032 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-09T20:21:50.032 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.032 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorzgrep.sh 2026-03-09T20:21:50.032 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-09T20:21:50.032 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.032 INFO:tasks.workunit.client.0.vm05.stderr:++ echo delete_pools_parallel 2026-03-09T20:21:50.033 INFO:tasks.workunit.client.0.vm05.stdout:test delete_pools_parallel on pid 95043 2026-03-09T20:21:50.033 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=delete_pools_parallel 2026-03-09T20:21:50.033 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=95043 2026-03-09T20:21:50.033 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test delete_pools_parallel on pid 95043' 2026-03-09T20:21:50.033 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=95043 2026-03-09T20:21:50.033 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-09T20:21:50.033 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T20:21:50.034 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-09T20:21:50.034 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-09T20:21:50.034 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-09T20:21:50.034 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.034 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-09T20:21:50.034 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.034 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorzgrep.sh 2026-03-09T20:21:50.034 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-09T20:21:50.034 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.035 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:50.035 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-09T20:21:50.035 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:50.035 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-09T20:21:50.035 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-09T20:21:50.035 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s cls 2026-03-09T20:21:50.035 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' cls' 2026-03-09T20:21:50.036 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_delete_pools_parallel --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/delete_pools_parallel.xml 2>&1 | tee ceph_test_rados_delete_pools_parallel.log | sed "s/^/ delete_pools_parallel: /"' 2026-03-09T20:21:50.038 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-09T20:21:50.038 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-09T20:21:50.038 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-09T20:21:50.040 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-09T20:21:50.040 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-09T20:21:50.040 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-09T20:21:50.040 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.040 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-09T20:21:50.040 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.040 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorzgrep.sh 2026-03-09T20:21:50.040 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-09T20:21:50.040 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.040 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-09T20:21:50.040 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.040 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-09T20:21:50.041 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-09T20:21:50.041 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-09T20:21:50.041 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-09T20:21:50.041 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.041 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-09T20:21:50.041 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.041 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-09T20:21:50.041 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' 
-t 0 ']' 2026-03-09T20:21:50.041 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-09T20:21:50.041 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.041 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-09T20:21:50.041 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.041 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-09T20:21:50.041 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-09T20:21:50.041 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-09T20:21:50.041 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.041 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-09T20:21:50.041 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.041 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-09T20:21:50.041 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.042 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-09T20:21:50.042 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-09T20:21:50.042 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-09T20:21:50.042 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.042 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-09T20:21:50.042 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.042 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-09T20:21:50.042 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-09T20:21:50.042 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-09T20:21:50.042 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-09T20:21:50.043 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-09T20:21:50.043 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.043 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-09T20:21:50.043 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.043 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-09T20:21:50.043 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.043 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-09T20:21:50.043 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.043 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/lang.sh 2026-03-09T20:21:50.043 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-09T20:21:50.043 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-09T20:21:50.043 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-09T20:21:50.043 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-09T20:21:50.043 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.043 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-09T20:21:50.043 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.043 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-09T20:21:50.043 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-09T20:21:50.043 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-09T20:21:50.043 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-09T20:21:50.043 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-09T20:21:50.043 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.043 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-09T20:21:50.043 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.043 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-09T20:21:50.043 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.043 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-09T20:21:50.043 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.043 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/lang.sh 2026-03-09T20:21:50.043 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-09T20:21:50.043 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-09T20:21:50.044 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-09T20:21:50.044 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-09T20:21:50.044 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.044 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-09T20:21:50.044 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.044 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorgrep.sh 2026-03-09T20:21:50.044 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.045 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-09T20:21:50.045 INFO:tasks.workunit.client.0.vm05.stderr:++ echo cls 2026-03-09T20:21:50.046 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-09T20:21:50.046 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-09T20:21:50.048 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-09T20:21:50.048 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-09T20:21:50.048 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-09T20:21:50.048 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.048 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-09T20:21:50.048 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.048 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-09T20:21:50.048 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-09T20:21:50.048 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-09T20:21:50.048 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-09T20:21:50.048 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-09T20:21:50.048 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.048 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-09T20:21:50.048 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.048 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-09T20:21:50.048 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.048 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-09T20:21:50.048 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.048 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/lang.sh 2026-03-09T20:21:50.048 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-09T20:21:50.049 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:50.049 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-09T20:21:50.049 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:50.049 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-09T20:21:50.049 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-09T20:21:50.049 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-09T20:21:50.049 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-09T20:21:50.049 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-09T20:21:50.050 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-09T20:21:50.050 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-09T20:21:50.050 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-09T20:21:50.050 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:50.050 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-09T20:21:50.050 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:50.050 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-09T20:21:50.050 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-09T20:21:50.050 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:50.050 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:50.050 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-09T20:21:50.050 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.050 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-09T20:21:50.050 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.050 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-09T20:21:50.050 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-09T20:21:50.050 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.050 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-09T20:21:50.050 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.050 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/which2.sh 2026-03-09T20:21:50.050 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/94889/exe ']' 2026-03-09T20:21:50.050 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-09T20:21:50.050 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-09T20:21:50.050 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-09T20:21:50.050 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:50.050 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-09T20:21:50.050 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:50.050 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-09T20:21:50.050 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-09T20:21:50.050 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:50.050 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:50.050 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-09T20:21:50.050 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.050 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-09T20:21:50.050 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.050 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-09T20:21:50.050 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-09T20:21:50.050 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.050 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-09T20:21:50.050 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.051 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/which2.sh 2026-03-09T20:21:50.051 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/94996/exe ']' 2026-03-09T20:21:50.051 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:50.051 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-09T20:21:50.051 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:50.051 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-09T20:21:50.051 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-09T20:21:50.051 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-09T20:21:50.052 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-09T20:21:50.052 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-09T20:21:50.052 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-09T20:21:50.052 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:50.052 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-09T20:21:50.052 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:50.052 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-09T20:21:50.052 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-09T20:21:50.052 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:50.052 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:50.052 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-09T20:21:50.052 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.052 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-09T20:21:50.052 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.052 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-09T20:21:50.052 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-09T20:21:50.052 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.053 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-09T20:21:50.053 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.053 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/which2.sh 2026-03-09T20:21:50.053 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/94964/exe ']' 2026-03-09T20:21:50.053 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/94996/exe 2026-03-09T20:21:50.053 INFO:tasks.workunit.client.0.vm05.stdout:test cls on pid 95077 2026-03-09T20:21:50.053 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=cls 2026-03-09T20:21:50.053 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=95077 2026-03-09T20:21:50.053 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test cls on pid 95077' 2026-03-09T20:21:50.053 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=95077 2026-03-09T20:21:50.053 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-09T20:21:50.053 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T20:21:50.054 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-09T20:21:50.054 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-09T20:21:50.054 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-09T20:21:50.054 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-09T20:21:50.054 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-09T20:21:50.054 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-09T20:21:50.054 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-09T20:21:50.054 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-09T20:21:50.054 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-09T20:21:50.054 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-09T20:21:50.054 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-09T20:21:50.054 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-09T20:21:50.054 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-09T20:21:50.054 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s cmd 2026-03-09T20:21:50.055 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' cmd' 2026-03-09T20:21:50.055 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_cls 2>&1 | tee ceph_test_neorados_cls.log | sed "s/^/ cls: /"' 2026-03-09T20:21:50.056 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-09T20:21:50.056 INFO:tasks.workunit.client.0.vm05.stderr:+ . 
/etc/bashrc 2026-03-09T20:21:50.056 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-09T20:21:50.056 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-09T20:21:50.056 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.056 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-09T20:21:50.056 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ list_parallel: /' 2026-03-09T20:21:50.057 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-09T20:21:50.057 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-09T20:21:50.057 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-09T20:21:50.057 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.057 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-09T20:21:50.057 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.057 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorzgrep.sh 2026-03-09T20:21:50.057 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-09T20:21:50.057 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.057 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_list_parallel.log 2026-03-09T20:21:50.058 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_list_parallel --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/list_parallel.xml 2026-03-09T20:21:50.059 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:50.059 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-09T20:21:50.059 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:50.059 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-09T20:21:50.059 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-09T20:21:50.059 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/94964/exe 2026-03-09T20:21:50.061 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-09T20:21:50.062 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-09T20:21:50.062 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-09T20:21:50.062 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-09T20:21:50.062 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-09T20:21:50.062 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-09T20:21:50.062 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-09T20:21:50.062 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/94889/exe 2026-03-09T20:21:50.062 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-09T20:21:50.062 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-09T20:21:50.062 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-09T20:21:50.062 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-09T20:21:50.062 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-09T20:21:50.062 
INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-09T20:21:50.062 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-09T20:21:50.062 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-09T20:21:50.063 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-09T20:21:50.063 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-09T20:21:50.063 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.063 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-09T20:21:50.063 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.063 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorgrep.sh 2026-03-09T20:21:50.063 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.064 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-09T20:21:50.064 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-09T20:21:50.064 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-09T20:21:50.064 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-09T20:21:50.064 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-09T20:21:50.064 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-09T20:21:50.064 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-09T20:21:50.064 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-09T20:21:50.064 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-09T20:21:50.064 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-09T20:21:50.064 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-09T20:21:50.064 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-09T20:21:50.065 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-09T20:21:50.065 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-09T20:21:50.065 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-09T20:21:50.065 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.065 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-09T20:21:50.065 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.065 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-09T20:21:50.065 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-09T20:21:50.065 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-09T20:21:50.065 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.065 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-09T20:21:50.065 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.065 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorsysstat.sh 2026-03-09T20:21:50.065 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-09T20:21:50.065 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-09T20:21:50.065 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.065 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-09T20:21:50.065 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.065 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-09T20:21:50.065 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.067 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-09T20:21:50.068 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-09T20:21:50.069 INFO:tasks.workunit.client.0.vm05.stderr:++ echo cmd 2026-03-09T20:21:50.070 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-09T20:21:50.070 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-09T20:21:50.070 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-09T20:21:50.070 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:50.070 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-09T20:21:50.070 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:50.070 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-09T20:21:50.070 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-09T20:21:50.070 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_service_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_service_pp.xml 2026-03-09T20:21:50.070 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:50.070 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:50.070 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-09T20:21:50.070 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.070 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-09T20:21:50.070 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.070 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-09T20:21:50.070 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-09T20:21:50.070 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.070 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-09T20:21:50.070 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.070 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/which2.sh 2026-03-09T20:21:50.070 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/94940/exe ']' 2026-03-09T20:21:50.070 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_api_service_pp.log 2026-03-09T20:21:50.071 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_c_read_operations --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_c_read_operations.xml 2026-03-09T20:21:50.071 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_service_pp: /' 2026-03-09T20:21:50.073 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_api_c_read_operations.log 2026-03-09T20:21:50.074 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_c_read_operations: /' 2026-03-09T20:21:50.080 INFO:tasks.workunit.client.0.vm05.stdout:test cmd on pid 95120 2026-03-09T20:21:50.080 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=cmd 2026-03-09T20:21:50.080 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=95120 2026-03-09T20:21:50.080 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test cmd on pid 95120' 2026-03-09T20:21:50.080 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=95120 2026-03-09T20:21:50.080 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-09T20:21:50.080 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T20:21:50.085 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-09T20:21:50.086 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-09T20:21:50.086 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-09T20:21:50.086 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.086 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-09T20:21:50.086 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.086 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-09T20:21:50.086 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-09T20:21:50.086 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-09T20:21:50.086 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-09T20:21:50.086 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-09T20:21:50.086 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.086 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-09T20:21:50.086 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.086 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-09T20:21:50.086 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.086 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-09T20:21:50.086 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.086 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/lang.sh 2026-03-09T20:21:50.086 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-09T20:21:50.086 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_cmd 2>&1 | tee ceph_test_neorados_cmd.log | sed "s/^/ cmd: /"' 2026-03-09T20:21:50.086 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-09T20:21:50.087 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-09T20:21:50.087 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-09T20:21:50.087 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-09T20:21:50.087 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.087 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-09T20:21:50.087 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s handler_error 2026-03-09T20:21:50.087 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' handler_error' 2026-03-09T20:21:50.089 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-09T20:21:50.089 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-09T20:21:50.089 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-09T20:21:50.089 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.089 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-09T20:21:50.089 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.089 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-09T20:21:50.089 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-09T20:21:50.089 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-09T20:21:50.089 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.089 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-09T20:21:50.089 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.089 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-09T20:21:50.089 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-09T20:21:50.089 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-09T20:21:50.089 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.090 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-09T20:21:50.090 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.090 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-09T20:21:50.090 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.090 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-09T20:21:50.091 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-09T20:21:50.091 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-09T20:21:50.091 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-09T20:21:50.091 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.091 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-09T20:21:50.091 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.091 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorgrep.sh 2026-03-09T20:21:50.091 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.092 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-09T20:21:50.092 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-09T20:21:50.092 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-09T20:21:50.092 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.092 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-09T20:21:50.092 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.092 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorzgrep.sh 2026-03-09T20:21:50.092 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-09T20:21:50.092 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.134 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:50.134 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-09T20:21:50.134 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:50.134 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-09T20:21:50.134 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-09T20:21:50.134 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-09T20:21:50.135 INFO:tasks.workunit.client.0.vm05.stderr:++ echo handler_error 2026-03-09T20:21:50.136 INFO:tasks.workunit.client.0.vm05.stdout:test handler_error on pid 95150 2026-03-09T20:21:50.136 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=handler_error 2026-03-09T20:21:50.136 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=95150 2026-03-09T20:21:50.136 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test handler_error on pid 95150' 2026-03-09T20:21:50.136 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=95150 2026-03-09T20:21:50.137 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-09T20:21:50.137 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T20:21:50.137 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-09T20:21:50.138 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/94940/exe 2026-03-09T20:21:50.139 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-09T20:21:50.139 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-09T20:21:50.139 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-09T20:21:50.139 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-09T20:21:50.139 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.139 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-09T20:21:50.139 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.139 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-09T20:21:50.139 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' 
-t 0 ']' 2026-03-09T20:21:50.139 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-09T20:21:50.139 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.139 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-09T20:21:50.139 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.139 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-09T20:21:50.139 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-09T20:21:50.139 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-09T20:21:50.139 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.139 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-09T20:21:50.139 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.139 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-09T20:21:50.139 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.140 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-09T20:21:50.140 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-09T20:21:50.140 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-09T20:21:50.140 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-09T20:21:50.140 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-09T20:21:50.140 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-09T20:21:50.140 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-09T20:21:50.140 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-09T20:21:50.140 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-09T20:21:50.140 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-09T20:21:50.140 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-09T20:21:50.140 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-09T20:21:50.140 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-09T20:21:50.140 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-09T20:21:50.140 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-09T20:21:50.140 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:50.140 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-09T20:21:50.140 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:50.140 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-09T20:21:50.140 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-09T20:21:50.140 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:50.140 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:50.140 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-09T20:21:50.140 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.140 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-09T20:21:50.140 
INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.140 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-09T20:21:50.140 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-09T20:21:50.140 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.140 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-09T20:21:50.140 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.140 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/which2.sh 2026-03-09T20:21:50.140 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/95025/exe ']' 2026-03-09T20:21:50.141 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-09T20:21:50.141 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-09T20:21:50.141 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-09T20:21:50.141 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.141 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-09T20:21:50.141 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.141 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-09T20:21:50.141 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-09T20:21:50.141 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-09T20:21:50.141 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-09T20:21:50.141 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-09T20:21:50.141 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.141 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-09T20:21:50.141 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.141 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-09T20:21:50.141 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.141 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-09T20:21:50.141 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.141 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/lang.sh 2026-03-09T20:21:50.141 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-09T20:21:50.142 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s io 2026-03-09T20:21:50.142 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' io' 2026-03-09T20:21:50.142 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_handler_error 2>&1 | tee ceph_test_neorados_handler_error.log | sed "s/^/ handler_error: /"' 2026-03-09T20:21:50.143 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-09T20:21:50.143 INFO:tasks.workunit.client.0.vm05.stderr:+ . 
/etc/bashrc 2026-03-09T20:21:50.143 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-09T20:21:50.143 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-09T20:21:50.143 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.143 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-09T20:21:50.144 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_c_write_operations --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_c_write_operations.xml 2026-03-09T20:21:50.144 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-09T20:21:50.144 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-09T20:21:50.144 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-09T20:21:50.144 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.144 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-09T20:21:50.144 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.144 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorzgrep.sh 2026-03-09T20:21:50.144 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-09T20:21:50.144 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.146 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_api_c_write_operations.log 2026-03-09T20:21:50.149 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-09T20:21:50.149 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_c_write_operations: /' 2026-03-09T20:21:50.150 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-09T20:21:50.151 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-09T20:21:50.151 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-09T20:21:50.151 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.151 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-09T20:21:50.151 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.151 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorgrep.sh 2026-03-09T20:21:50.151 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.152 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-09T20:21:50.152 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-09T20:21:50.152 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-09T20:21:50.152 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.152 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-09T20:21:50.152 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.152 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorzgrep.sh 2026-03-09T20:21:50.152 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-09T20:21:50.152 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.154 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/95025/exe 2026-03-09T20:21:50.154 INFO:tasks.workunit.client.0.vm05.stderr:++ echo io 2026-03-09T20:21:50.155 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-09T20:21:50.156 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-09T20:21:50.156 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-09T20:21:50.156 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-09T20:21:50.156 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-09T20:21:50.156 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-09T20:21:50.156 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-09T20:21:50.156 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-09T20:21:50.156 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-09T20:21:50.157 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-09T20:21:50.157 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-09T20:21:50.157 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-09T20:21:50.157 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-09T20:21:50.157 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-09T20:21:50.158 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:50.158 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-09T20:21:50.158 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:50.158 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-09T20:21:50.158 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-09T20:21:50.162 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-09T20:21:50.162 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-09T20:21:50.162 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-09T20:21:50.162 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.162 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-09T20:21:50.162 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.162 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/debuginfod.sh 2026-03-09T20:21:50.162 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-09T20:21:50.162 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-09T20:21:50.162 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-09T20:21:50.162 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-09T20:21:50.162 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.162 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-09T20:21:50.162 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.162 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-09T20:21:50.162 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.162 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-09T20:21:50.162 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.162 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/lang.sh 2026-03-09T20:21:50.162 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-09T20:21:50.164 INFO:tasks.workunit.client.0.vm05.stdout:test io on pid 95186 2026-03-09T20:21:50.164 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=io 2026-03-09T20:21:50.164 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=95186 2026-03-09T20:21:50.164 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test io on pid 95186' 2026-03-09T20:21:50.164 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=95186 2026-03-09T20:21:50.164 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-09T20:21:50.164 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T20:21:50.164 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_open_pools_parallel --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/open_pools_parallel.xml 2026-03-09T20:21:50.164 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_open_pools_parallel.log 2026-03-09T20:21:50.164 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ open_pools_parallel: /' 2026-03-09T20:21:50.165 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-09T20:21:50.166 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-09T20:21:50.166 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-09T20:21:50.166 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-09T20:21:50.166 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:50.166 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-09T20:21:50.166 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:50.166 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-09T20:21:50.166 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-09T20:21:50.166 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:50.166 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:50.166 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = 
linux ']' 2026-03-09T20:21:50.166 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.166 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-09T20:21:50.166 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.166 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-09T20:21:50.166 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-09T20:21:50.166 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.166 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-09T20:21:50.166 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.166 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/which2.sh 2026-03-09T20:21:50.166 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/95043/exe ']' 2026-03-09T20:21:50.168 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-09T20:21:50.170 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-09T20:21:50.170 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-09T20:21:50.170 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-09T20:21:50.170 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.170 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-09T20:21:50.170 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.170 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-09T20:21:50.170 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-09T20:21:50.170 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-09T20:21:50.170 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.170 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-09T20:21:50.170 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.170 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-09T20:21:50.170 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-09T20:21:50.170 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-09T20:21:50.170 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.170 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-09T20:21:50.170 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.170 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorxzgrep.sh 2026-03-09T20:21:50.170 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.171 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-09T20:21:50.171 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-09T20:21:50.171 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-09T20:21:50.171 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.171 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-09T20:21:50.171 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.171 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-09T20:21:50.171 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-09T20:21:50.171 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-09T20:21:50.171 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-09T20:21:50.171 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-09T20:21:50.171 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.171 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-09T20:21:50.171 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.171 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-09T20:21:50.171 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.171 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-09T20:21:50.171 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.171 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/lang.sh 2026-03-09T20:21:50.171 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-09T20:21:50.175 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:50.175 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-09T20:21:50.175 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:50.175 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-09T20:21:50.175 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-09T20:21:50.175 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s ec_io 2026-03-09T20:21:50.175 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' ec_io' 2026-03-09T20:21:50.175 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_io 2>&1 | tee ceph_test_neorados_io.log | sed "s/^/ io: /"' 2026-03-09T20:21:50.176 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-09T20:21:50.177 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/95043/exe 2026-03-09T20:21:50.177 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-09T20:21:50.178 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-09T20:21:50.178 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-09T20:21:50.179 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-09T20:21:50.179 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-09T20:21:50.179 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-09T20:21:50.179 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-09T20:21:50.179 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-09T20:21:50.179 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-09T20:21:50.179 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-09T20:21:50.179 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-09T20:21:50.179 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-09T20:21:50.179 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-09T20:21:50.179 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-09T20:21:50.180 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-09T20:21:50.180 INFO:tasks.workunit.client.0.vm05.stderr:+ . 
/etc/bashrc 2026-03-09T20:21:50.180 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-09T20:21:50.180 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-09T20:21:50.180 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.180 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-09T20:21:50.183 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:50.183 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-09T20:21:50.183 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:50.183 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-09T20:21:50.183 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-09T20:21:50.184 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-09T20:21:50.184 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-09T20:21:50.184 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-09T20:21:50.184 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:50.184 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-09T20:21:50.184 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:50.184 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-09T20:21:50.184 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-09T20:21:50.184 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:50.184 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:50.184 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-09T20:21:50.184 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.184 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-09T20:21:50.184 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.184 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-09T20:21:50.184 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-09T20:21:50.184 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.184 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-09T20:21:50.184 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.184 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/which2.sh 2026-03-09T20:21:50.184 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/95077/exe ']' 2026-03-09T20:21:50.185 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-09T20:21:50.185 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-09T20:21:50.185 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-09T20:21:50.185 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.185 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-09T20:21:50.185 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.185 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorzgrep.sh 2026-03-09T20:21:50.185 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-09T20:21:50.185 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.185 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-09T20:21:50.186 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-09T20:21:50.186 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-09T20:21:50.186 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.186 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-09T20:21:50.186 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.186 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorgrep.sh 2026-03-09T20:21:50.186 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.186 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_delete_pools_parallel --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/delete_pools_parallel.xml 2026-03-09T20:21:50.186 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_delete_pools_parallel.log 2026-03-09T20:21:50.187 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ delete_pools_parallel: /' 2026-03-09T20:21:50.188 INFO:tasks.workunit.client.0.vm05.stderr:++ echo ec_io 2026-03-09T20:21:50.188 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-09T20:21:50.191 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-09T20:21:50.195 INFO:tasks.workunit.client.0.vm05.stdout:test ec_io on pid 95225 2026-03-09T20:21:50.195 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=ec_io 2026-03-09T20:21:50.195 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=95225 2026-03-09T20:21:50.195 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test ec_io on pid 95225' 2026-03-09T20:21:50.195 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=95225 2026-03-09T20:21:50.195 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-09T20:21:50.195 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T20:21:50.199 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-09T20:21:50.199 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-09T20:21:50.199 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-09T20:21:50.199 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:50.199 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-09T20:21:50.199 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:50.199 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-09T20:21:50.199 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-09T20:21:50.199 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:50.199 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:50.199 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-09T20:21:50.199 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in 
/etc/profile.d/*.sh 2026-03-09T20:21:50.199 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-09T20:21:50.199 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.199 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-09T20:21:50.199 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-09T20:21:50.199 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.199 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-09T20:21:50.199 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.199 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/which2.sh 2026-03-09T20:21:50.199 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/95120/exe ']' 2026-03-09T20:21:50.200 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_ec_io 2>&1 | tee ceph_test_neorados_ec_io.log | sed "s/^/ ec_io: /"' 2026-03-09T20:21:50.200 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s list 2026-03-09T20:21:50.201 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' list' 2026-03-09T20:21:50.202 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/95077/exe 2026-03-09T20:21:50.205 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-09T20:21:50.205 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-09T20:21:50.205 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-09T20:21:50.205 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.205 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-09T20:21:50.205 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.205 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-09T20:21:50.205 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-09T20:21:50.205 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-09T20:21:50.205 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.205 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-09T20:21:50.205 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.205 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-09T20:21:50.205 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-09T20:21:50.205 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-09T20:21:50.205 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.205 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-09T20:21:50.205 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.206 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorxzgrep.sh 2026-03-09T20:21:50.206 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.206 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-09T20:21:50.207 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-09T20:21:50.207 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-09T20:21:50.207 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-09T20:21:50.207 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-09T20:21:50.207 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-09T20:21:50.207 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-09T20:21:50.207 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-09T20:21:50.207 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-09T20:21:50.207 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-09T20:21:50.207 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-09T20:21:50.207 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-09T20:21:50.207 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-09T20:21:50.208 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-09T20:21:50.208 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-09T20:21:50.208 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-09T20:21:50.208 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-09T20:21:50.208 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.208 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-09T20:21:50.211 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_neorados_cls.log 2026-03-09T20:21:50.212 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ cls: /' 2026-03-09T20:21:50.213 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/95120/exe 2026-03-09T20:21:50.214 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-09T20:21:50.214 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-09T20:21:50.214 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-09T20:21:50.214 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.214 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-09T20:21:50.214 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.214 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/debuginfod.sh 2026-03-09T20:21:50.214 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-09T20:21:50.214 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-09T20:21:50.214 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-09T20:21:50.214 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-09T20:21:50.214 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.214 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-09T20:21:50.214 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.214 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-09T20:21:50.214 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.214 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-09T20:21:50.214 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.214 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/lang.sh 2026-03-09T20:21:50.214 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-09T20:21:50.214 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_neorados_cls 2026-03-09T20:21:50.214 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-09T20:21:50.215 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-09T20:21:50.215 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-09T20:21:50.215 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-09T20:21:50.215 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-09T20:21:50.216 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-09T20:21:50.216 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-09T20:21:50.216 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-09T20:21:50.216 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-09T20:21:50.216 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-09T20:21:50.216 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-09T20:21:50.216 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-09T20:21:50.216 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-09T20:21:50.217 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-09T20:21:50.217 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-09T20:21:50.217 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-09T20:21:50.217 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.217 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-09T20:21:50.217 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.217 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorgrep.sh 2026-03-09T20:21:50.217 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.218 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-09T20:21:50.221 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_neorados_cmd 2026-03-09T20:21:50.222 INFO:tasks.workunit.client.0.vm05.stderr:++ echo list 2026-03-09T20:21:50.222 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-09T20:21:50.224 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_neorados_cmd.log 2026-03-09T20:21:50.224 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ cmd: /' 2026-03-09T20:21:50.229 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:50.229 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-09T20:21:50.229 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:50.229 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-09T20:21:50.229 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-09T20:21:50.232 INFO:tasks.workunit.client.0.vm05.stdout:test list on pid 95284 2026-03-09T20:21:50.232 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=list 2026-03-09T20:21:50.233 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=95284 2026-03-09T20:21:50.233 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test list on pid 95284' 2026-03-09T20:21:50.233 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=95284 2026-03-09T20:21:50.233 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-09T20:21:50.233 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T20:21:50.236 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-09T20:21:50.238 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-09T20:21:50.238 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-09T20:21:50.238 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-09T20:21:50.238 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.238 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-09T20:21:50.238 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.238 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorzgrep.sh 2026-03-09T20:21:50.238 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-09T20:21:50.238 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.240 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-09T20:21:50.240 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-09T20:21:50.240 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-09T20:21:50.240 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.240 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-09T20:21:50.240 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.240 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-09T20:21:50.240 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-09T20:21:50.240 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-09T20:21:50.240 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.240 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-09T20:21:50.240 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.240 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-09T20:21:50.240 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-09T20:21:50.240 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-09T20:21:50.240 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.240 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-09T20:21:50.240 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.240 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-09T20:21:50.240 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.240 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-09T20:21:50.241 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-09T20:21:50.241 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-09T20:21:50.241 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:50.241 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-09T20:21:50.241 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:50.241 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-09T20:21:50.241 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-09T20:21:50.241 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:50.241 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:50.241 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-09T20:21:50.241 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.241 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-09T20:21:50.241 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.241 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/less.sh 2026-03-09T20:21:50.241 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-09T20:21:50.241 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.241 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-09T20:21:50.241 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.241 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/which2.sh 2026-03-09T20:21:50.241 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/95150/exe ']' 2026-03-09T20:21:50.242 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_list 2>&1 | tee ceph_test_neorados_list.log | sed "s/^/ list: /"' 2026-03-09T20:21:50.242 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s ec_list 2026-03-09T20:21:50.242 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' ec_list' 2026-03-09T20:21:50.283 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-09T20:21:50.283 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-09T20:21:50.283 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-09T20:21:50.283 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.283 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-09T20:21:50.283 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.283 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-09T20:21:50.283 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-09T20:21:50.283 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-09T20:21:50.283 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-09T20:21:50.283 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-09T20:21:50.283 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.283 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-09T20:21:50.283 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.283 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-09T20:21:50.283 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.283 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-09T20:21:50.283 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.283 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/lang.sh 2026-03-09T20:21:50.283 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-09T20:21:50.283 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-09T20:21:50.283 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-09T20:21:50.283 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-09T20:21:50.283 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.283 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-09T20:21:50.283 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.283 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorzgrep.sh 2026-03-09T20:21:50.283 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-09T20:21:50.283 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.283 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-09T20:21:50.284 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-09T20:21:50.284 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-09T20:21:50.287 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-09T20:21:50.287 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-09T20:21:50.287 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.287 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-09T20:21:50.290 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:50.290 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-09T20:21:50.290 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:50.290 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-09T20:21:50.290 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-09T20:21:50.297 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-09T20:21:50.297 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-09T20:21:50.297 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-09T20:21:50.297 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.297 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-09T20:21:50.297 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.297 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorgrep.sh 2026-03-09T20:21:50.297 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.298 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/95150/exe 2026-03-09T20:21:50.299 INFO:tasks.workunit.client.0.vm05.stderr:++ echo ec_list 2026-03-09T20:21:50.301 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-09T20:21:50.302 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-09T20:21:50.304 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-09T20:21:50.304 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-09T20:21:50.304 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-09T20:21:50.304 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-09T20:21:50.304 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-09T20:21:50.304 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-09T20:21:50.304 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-09T20:21:50.304 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-09T20:21:50.304 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-09T20:21:50.304 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-09T20:21:50.304 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-09T20:21:50.304 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-09T20:21:50.304 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-09T20:21:50.304 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-09T20:21:50.304 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-09T20:21:50.304 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-09T20:21:50.304 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.304 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-09T20:21:50.304 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.304 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-09T20:21:50.304 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-09T20:21:50.304 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-09T20:21:50.304 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-09T20:21:50.304 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-09T20:21:50.304 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.304 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-09T20:21:50.304 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.304 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/gawk.sh 2026-03-09T20:21:50.304 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.304 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-09T20:21:50.305 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.305 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/lang.sh 2026-03-09T20:21:50.305 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-09T20:21:50.306 INFO:tasks.workunit.client.0.vm05.stdout:test ec_list on pid 95337 2026-03-09T20:21:50.306 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=ec_list 2026-03-09T20:21:50.306 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=95337 2026-03-09T20:21:50.306 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test ec_list on pid 95337' 2026-03-09T20:21:50.306 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=95337 2026-03-09T20:21:50.306 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-09T20:21:50.306 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T20:21:50.306 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-09T20:21:50.306 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-09T20:21:50.306 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-09T20:21:50.306 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:50.306 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-09T20:21:50.306 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:50.306 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-09T20:21:50.306 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-09T20:21:50.306 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:50.306 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:50.306 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-09T20:21:50.306 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.306 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-09T20:21:50.306 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.306 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-09T20:21:50.306 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-09T20:21:50.306 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.306 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-09T20:21:50.306 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.306 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/which2.sh 2026-03-09T20:21:50.306 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/95186/exe ']' 2026-03-09T20:21:50.313 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_neorados_handler_error 2026-03-09T20:21:50.315 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_neorados_handler_error.log 2026-03-09T20:21:50.315 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ handler_error: /' 2026-03-09T20:21:50.317 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-09T20:21:50.320 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s misc 2026-03-09T20:21:50.321 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' misc' 2026-03-09T20:21:50.325 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-09T20:21:50.325 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-09T20:21:50.325 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-09T20:21:50.325 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.325 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-09T20:21:50.325 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.325 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-09T20:21:50.325 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-09T20:21:50.325 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-09T20:21:50.325 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.325 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-09T20:21:50.325 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.325 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-09T20:21:50.325 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-09T20:21:50.325 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-09T20:21:50.325 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.325 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-09T20:21:50.325 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.325 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorxzgrep.sh 2026-03-09T20:21:50.325 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.327 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/95186/exe 2026-03-09T20:21:50.328 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-09T20:21:50.330 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-09T20:21:50.331 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-09T20:21:50.331 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-09T20:21:50.331 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-09T20:21:50.331 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-09T20:21:50.331 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-09T20:21:50.331 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-09T20:21:50.331 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-09T20:21:50.331 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-09T20:21:50.331 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-09T20:21:50.331 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-09T20:21:50.331 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-09T20:21:50.334 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_ec_list 2>&1 | tee ceph_test_neorados_ec_list.log | sed "s/^/ ec_list: /"' 2026-03-09T20:21:50.335 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:50.335 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-09T20:21:50.335 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:50.335 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-09T20:21:50.335 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-09T20:21:50.345 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-09T20:21:50.345 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-09T20:21:50.345 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-09T20:21:50.345 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-09T20:21:50.345 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.345 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-09T20:21:50.348 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-09T20:21:50.350 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-09T20:21:50.350 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-09T20:21:50.350 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.350 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-09T20:21:50.350 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.350 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorgrep.sh 2026-03-09T20:21:50.350 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.351 INFO:tasks.workunit.client.0.vm05.stderr:++ echo misc 2026-03-09T20:21:50.352 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-09T20:21:50.354 INFO:tasks.workunit.client.0.vm05.stdout:test misc on pid 95385 2026-03-09T20:21:50.354 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=misc 2026-03-09T20:21:50.354 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=95385 2026-03-09T20:21:50.354 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test misc on pid 95385' 2026-03-09T20:21:50.354 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=95385 2026-03-09T20:21:50.354 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-09T20:21:50.354 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T20:21:50.357 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-09T20:21:50.362 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s pool 2026-03-09T20:21:50.362 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' pool' 2026-03-09T20:21:50.363 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_misc 2>&1 | tee ceph_test_neorados_misc.log | sed "s/^/ misc: /"' 2026-03-09T20:21:50.365 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-09T20:21:50.365 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-09T20:21:50.365 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-09T20:21:50.365 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:50.365 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-09T20:21:50.365 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:50.365 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-09T20:21:50.365 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-09T20:21:50.365 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:50.369 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:50.369 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-09T20:21:50.369 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.369 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-09T20:21:50.369 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.369 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-09T20:21:50.369 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-09T20:21:50.369 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.369 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-09T20:21:50.369 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.369 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/which2.sh 2026-03-09T20:21:50.369 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/95225/exe ']' 2026-03-09T20:21:50.370 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-09T20:21:50.370 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-09T20:21:50.372 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-09T20:21:50.375 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-09T20:21:50.375 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.375 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-09T20:21:50.379 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ io: /' 2026-03-09T20:21:50.379 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_neorados_io.log 2026-03-09T20:21:50.383 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_neorados_io 2026-03-09T20:21:50.405 INFO:tasks.workunit.client.0.vm05.stderr:++ echo pool 2026-03-09T20:21:50.405 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-09T20:21:50.405 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-09T20:21:50.405 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-09T20:21:50.405 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-09T20:21:50.405 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.405 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-09T20:21:50.405 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.405 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-09T20:21:50.405 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-09T20:21:50.405 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-09T20:21:50.405 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.405 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-09T20:21:50.405 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.405 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-09T20:21:50.405 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-09T20:21:50.405 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-09T20:21:50.405 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.405 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-09T20:21:50.405 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.405 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-09T20:21:50.405 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.405 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-09T20:21:50.406 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-09T20:21:50.406 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-09T20:21:50.406 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.406 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-09T20:21:50.406 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.406 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorgrep.sh 2026-03-09T20:21:50.406 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.413 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/95225/exe 2026-03-09T20:21:50.428 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-09T20:21:50.447 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-09T20:21:50.447 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-09T20:21:50.447 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-09T20:21:50.447 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-09T20:21:50.447 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-09T20:21:50.447 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-09T20:21:50.447 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-09T20:21:50.447 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-09T20:21:50.447 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-09T20:21:50.447 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-09T20:21:50.447 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-09T20:21:50.447 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-09T20:21:50.450 INFO:tasks.workunit.client.0.vm05.stdout:test pool on pid 95466 2026-03-09T20:21:50.450 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=pool 2026-03-09T20:21:50.450 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=95466 2026-03-09T20:21:50.450 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test pool on pid 95466' 2026-03-09T20:21:50.450 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=95466 2026-03-09T20:21:50.450 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-09T20:21:50.450 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T20:21:50.453 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-09T20:21:50.453 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-09T20:21:50.453 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-09T20:21:50.453 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.453 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-09T20:21:50.453 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.453 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorzgrep.sh 2026-03-09T20:21:50.453 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-09T20:21:50.453 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.453 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_neorados_ec_io 2026-03-09T20:21:50.453 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_neorados_ec_io.log 2026-03-09T20:21:50.455 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ ec_io: /' 2026-03-09T20:21:50.478 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_pool 2>&1 | tee ceph_test_neorados_pool.log | sed "s/^/ pool: /"' 2026-03-09T20:21:50.478 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s read_operations 2026-03-09T20:21:50.480 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-09T20:21:50.480 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-09T20:21:50.480 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-09T20:21:50.480 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.480 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-09T20:21:50.480 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.480 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-09T20:21:50.480 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-09T20:21:50.480 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-09T20:21:50.480 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.480 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-09T20:21:50.480 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.480 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-09T20:21:50.480 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-09T20:21:50.480 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-09T20:21:50.480 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.480 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-09T20:21:50.480 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.480 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-09T20:21:50.480 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.480 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-09T20:21:50.480 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-09T20:21:50.480 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-09T20:21:50.480 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.480 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-09T20:21:50.480 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.480 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorzgrep.sh 2026-03-09T20:21:50.480 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-09T20:21:50.480 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.485 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' read_operations' 2026-03-09T20:21:50.500 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-09T20:21:50.500 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-09T20:21:50.500 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-09T20:21:50.500 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-09T20:21:50.500 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.500 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-09T20:21:50.510 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-09T20:21:50.510 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-09T20:21:50.510 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-09T20:21:50.510 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.510 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-09T20:21:50.510 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.510 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-09T20:21:50.510 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-09T20:21:50.510 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-09T20:21:50.510 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-09T20:21:50.510 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-09T20:21:50.510 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.510 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-09T20:21:50.510 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.510 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-09T20:21:50.510 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.510 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-09T20:21:50.510 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.510 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/lang.sh 2026-03-09T20:21:50.510 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-09T20:21:50.515 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-09T20:21:50.517 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-09T20:21:50.517 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-09T20:21:50.517 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-09T20:21:50.517 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-09T20:21:50.517 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.517 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-09T20:21:50.517 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.517 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/debuginfod.sh 2026-03-09T20:21:50.517 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-09T20:21:50.517 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-09T20:21:50.517 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-09T20:21:50.517 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-09T20:21:50.517 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.517 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-09T20:21:50.517 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.517 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-09T20:21:50.517 INFO:tasks.workunit.client.0.vm05.stderr:++ echo read_operations 2026-03-09T20:21:50.517 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.517 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-09T20:21:50.517 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.517 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/lang.sh 2026-03-09T20:21:50.517 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-09T20:21:50.517 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-09T20:21:50.519 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-09T20:21:50.519 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-09T20:21:50.519 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.519 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-09T20:21:50.519 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.519 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorgrep.sh 2026-03-09T20:21:50.519 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.532 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-09T20:21:50.536 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:50.536 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-09T20:21:50.536 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:50.536 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-09T20:21:50.536 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-09T20:21:50.538 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-09T20:21:50.538 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-09T20:21:50.538 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-09T20:21:50.538 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.538 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-09T20:21:50.538 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.538 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorzgrep.sh 2026-03-09T20:21:50.538 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-09T20:21:50.538 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.538 INFO:tasks.workunit.client.0.vm05.stdout:test read_operations on pid 95513 2026-03-09T20:21:50.538 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=read_operations 2026-03-09T20:21:50.538 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=95513 2026-03-09T20:21:50.538 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test read_operations on pid 95513' 2026-03-09T20:21:50.538 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=95513 2026-03-09T20:21:50.538 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-09T20:21:50.538 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T20:21:50.543 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-09T20:21:50.543 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-09T20:21:50.543 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-09T20:21:50.543 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.543 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-09T20:21:50.543 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.543 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-09T20:21:50.543 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-09T20:21:50.543 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-09T20:21:50.543 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.543 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-09T20:21:50.543 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.543 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-09T20:21:50.543 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-09T20:21:50.543 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-09T20:21:50.543 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.543 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-09T20:21:50.543 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.543 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorxzgrep.sh 2026-03-09T20:21:50.543 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.546 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-09T20:21:50.548 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_read_operations 2>&1 | tee ceph_test_neorados_read_operations.log | sed "s/^/ read_operations: /"' 2026-03-09T20:21:50.549 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:50.549 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-09T20:21:50.549 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:50.549 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-09T20:21:50.549 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-09T20:21:50.555 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-09T20:21:50.555 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s snapshots 2026-03-09T20:21:50.556 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' snapshots' 2026-03-09T20:21:50.556 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-09T20:21:50.556 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-09T20:21:50.556 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-09T20:21:50.556 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:50.556 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-09T20:21:50.556 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:50.556 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-09T20:21:50.556 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-09T20:21:50.556 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:50.556 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:50.556 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-09T20:21:50.556 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.556 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-09T20:21:50.556 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.556 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-09T20:21:50.556 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-09T20:21:50.556 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.556 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-09T20:21:50.556 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.556 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/which2.sh 2026-03-09T20:21:50.556 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/95337/exe ']' 2026-03-09T20:21:50.556 INFO:tasks.workunit.client.0.vm05.stderr:+ . 
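The xtrace interleaved above comes from the neorados workunit launching each ceph_test_neorados_* binary in its own background subshell under pipefail, teeing its output to a per-test log file and prefixing every line with the test name, while recording the child pid in an associative array. A minimal reconstruction of that launch loop follows, assuming the pid is captured via $! and that the test binaries are on PATH (the verbatim qa workunit script is not reproduced in this log):

    # Reconstruction of the launch pattern visible in the trace above.
    declare -A pids
    for f in cls cmd handler_error io ec_io list ec_list misc pool \
             read_operations snapshots watch_notify write_operations; do
        r=$(printf %25s "$f")                # right-padded label, e.g. r='                snapshots'
        ff=$(echo "$f" | awk '{print $1}')   # bare test name used in the status line
        # pipefail keeps a failing test binary from being masked by tee/sed
        bash -o pipefail -exc \
            "ceph_test_neorados_$f 2>&1 | tee ceph_test_neorados_$f.log | sed \"s/^/$r: /\"" &
        pid=$!                               # assumption: pid captured with $!
        echo "test $ff on pid $pid"
        pids[$f]=$pid
    done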
/etc/bashrc 2026-03-09T20:21:50.556 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-09T20:21:50.556 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-09T20:21:50.557 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.557 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-09T20:21:50.557 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-09T20:21:50.561 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-09T20:21:50.561 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-09T20:21:50.561 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-09T20:21:50.561 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:50.561 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-09T20:21:50.561 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:50.561 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-09T20:21:50.561 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-09T20:21:50.561 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:50.561 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:50.561 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-09T20:21:50.561 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.561 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-09T20:21:50.561 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.561 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-09T20:21:50.561 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-09T20:21:50.561 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.561 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-09T20:21:50.561 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.561 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/which2.sh 2026-03-09T20:21:50.561 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/95284/exe ']' 2026-03-09T20:21:50.561 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-09T20:21:50.561 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-09T20:21:50.561 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-09T20:21:50.561 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.561 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-09T20:21:50.561 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.561 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorgrep.sh 2026-03-09T20:21:50.561 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.562 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/95337/exe 2026-03-09T20:21:50.572 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-09T20:21:50.573 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-09T20:21:50.573 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-09T20:21:50.573 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-09T20:21:50.573 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-09T20:21:50.573 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-09T20:21:50.573 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-09T20:21:50.573 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-09T20:21:50.573 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-09T20:21:50.573 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-09T20:21:50.573 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-09T20:21:50.573 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-09T20:21:50.573 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-09T20:21:50.573 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-09T20:21:50.573 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-09T20:21:50.573 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-09T20:21:50.573 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.573 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-09T20:21:50.573 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.573 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-09T20:21:50.573 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-09T20:21:50.573 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-09T20:21:50.573 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-09T20:21:50.573 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-09T20:21:50.573 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.573 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-09T20:21:50.573 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.573 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-09T20:21:50.573 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.573 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-09T20:21:50.573 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.573 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/lang.sh 2026-03-09T20:21:50.573 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-09T20:21:50.573 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ ec_list: /' 2026-03-09T20:21:50.573 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-09T20:21:50.573 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-09T20:21:50.573 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-09T20:21:50.573 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.573 INFO:tasks.workunit.client.0.vm05.stderr:++ echo snapshots 2026-03-09T20:21:50.573 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-09T20:21:50.573 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.573 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorzgrep.sh 2026-03-09T20:21:50.573 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-09T20:21:50.573 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.573 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-09T20:21:50.573 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_neorados_ec_list.log 2026-03-09T20:21:50.580 INFO:tasks.workunit.client.0.vm05.stdout:test snapshots on pid 95549 2026-03-09T20:21:50.580 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=snapshots 2026-03-09T20:21:50.580 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=95549 2026-03-09T20:21:50.580 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test snapshots on pid 95549' 2026-03-09T20:21:50.580 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=95549 2026-03-09T20:21:50.580 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-09T20:21:50.580 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T20:21:50.581 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-09T20:21:50.581 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-09T20:21:50.581 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-09T20:21:50.581 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.581 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-09T20:21:50.581 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.581 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-09T20:21:50.581 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-09T20:21:50.581 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-09T20:21:50.581 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.581 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-09T20:21:50.581 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.581 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorsysstat.sh 2026-03-09T20:21:50.581 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-09T20:21:50.581 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-09T20:21:50.581 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.581 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-09T20:21:50.581 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.581 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-09T20:21:50.581 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.588 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/95284/exe 2026-03-09T20:21:50.589 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-09T20:21:50.589 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_neorados_ec_list 2026-03-09T20:21:50.589 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-09T20:21:50.589 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-09T20:21:50.590 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-09T20:21:50.590 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-09T20:21:50.590 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-09T20:21:50.590 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-09T20:21:50.590 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-09T20:21:50.590 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-09T20:21:50.590 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-09T20:21:50.590 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-09T20:21:50.590 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-09T20:21:50.590 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-09T20:21:50.590 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ list: /' 2026-03-09T20:21:50.590 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_neorados_list.log 2026-03-09T20:21:50.590 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_neorados_list 2026-03-09T20:21:50.602 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-09T20:21:50.602 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_snapshots 2>&1 | tee ceph_test_neorados_snapshots.log | sed "s/^/ snapshots: /"' 2026-03-09T20:21:50.603 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-09T20:21:50.603 INFO:tasks.workunit.client.0.vm05.stderr:+ . 
/etc/bashrc 2026-03-09T20:21:50.603 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-09T20:21:50.603 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-09T20:21:50.603 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.603 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-09T20:21:50.608 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s watch_notify 2026-03-09T20:21:50.608 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' watch_notify' 2026-03-09T20:21:50.613 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:50.613 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-09T20:21:50.613 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:50.613 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-09T20:21:50.613 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-09T20:21:50.616 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-09T20:21:50.616 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-09T20:21:50.616 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-09T20:21:50.616 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.616 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-09T20:21:50.616 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.616 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-09T20:21:50.616 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-09T20:21:50.616 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-09T20:21:50.616 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-09T20:21:50.616 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-09T20:21:50.616 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.616 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-09T20:21:50.616 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.616 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-09T20:21:50.616 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.617 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-09T20:21:50.617 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.617 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/lang.sh 2026-03-09T20:21:50.617 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-09T20:21:50.617 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-09T20:21:50.617 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-09T20:21:50.617 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-09T20:21:50.617 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.617 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-09T20:21:50.617 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.617 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorzgrep.sh 2026-03-09T20:21:50.617 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-09T20:21:50.617 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.617 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-09T20:21:50.618 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-09T20:21:50.618 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-09T20:21:50.618 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-09T20:21:50.618 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.618 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-09T20:21:50.618 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.618 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorgrep.sh 2026-03-09T20:21:50.618 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.626 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-09T20:21:50.626 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-09T20:21:50.626 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-09T20:21:50.626 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:50.626 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-09T20:21:50.626 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:50.626 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-09T20:21:50.626 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-09T20:21:50.626 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:50.626 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:50.626 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-09T20:21:50.626 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.626 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-09T20:21:50.626 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.626 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/less.sh 2026-03-09T20:21:50.626 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-09T20:21:50.626 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.626 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-09T20:21:50.626 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.626 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/which2.sh 2026-03-09T20:21:50.626 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/95385/exe ']' 2026-03-09T20:21:50.628 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-09T20:21:50.629 INFO:tasks.workunit.client.0.vm05.stderr:++ echo watch_notify 2026-03-09T20:21:50.629 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-09T20:21:50.636 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=watch_notify 2026-03-09T20:21:50.636 INFO:tasks.workunit.client.0.vm05.stdout:test watch_notify on pid 95597 2026-03-09T20:21:50.636 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=95597 2026-03-09T20:21:50.636 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test watch_notify on pid 95597' 2026-03-09T20:21:50.636 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=95597 2026-03-09T20:21:50.636 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-09T20:21:50.636 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T20:21:50.641 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-09T20:21:50.641 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-09T20:21:50.641 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-09T20:21:50.641 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.641 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-09T20:21:50.641 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.641 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-09T20:21:50.641 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-09T20:21:50.641 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-09T20:21:50.641 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-09T20:21:50.641 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-09T20:21:50.641 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.641 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-09T20:21:50.641 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.641 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-09T20:21:50.641 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.641 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-09T20:21:50.641 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.641 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/lang.sh 2026-03-09T20:21:50.641 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-09T20:21:50.643 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:50.643 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-09T20:21:50.643 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:50.643 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-09T20:21:50.643 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-09T20:21:50.649 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s write_operations 2026-03-09T20:21:50.649 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-09T20:21:50.649 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_watch_notify 2>&1 | tee ceph_test_neorados_watch_notify.log | sed "s/^/ watch_notify: /"' 2026-03-09T20:21:50.649 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' write_operations' 2026-03-09T20:21:50.650 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/95385/exe 2026-03-09T20:21:50.650 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-09T20:21:50.654 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-09T20:21:50.654 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-09T20:21:50.654 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-09T20:21:50.654 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-09T20:21:50.654 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.654 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-09T20:21:50.654 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.654 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-09T20:21:50.654 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-09T20:21:50.654 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-09T20:21:50.654 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.654 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-09T20:21:50.654 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.654 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-09T20:21:50.654 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-09T20:21:50.654 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-09T20:21:50.654 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.654 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-09T20:21:50.654 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.654 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-09T20:21:50.654 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.655 INFO:tasks.workunit.client.0.vm05.stderr:+ . 
/etc/bashrc 2026-03-09T20:21:50.655 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-09T20:21:50.655 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-09T20:21:50.656 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-09T20:21:50.656 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-09T20:21:50.656 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-09T20:21:50.656 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-09T20:21:50.656 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-09T20:21:50.656 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-09T20:21:50.656 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-09T20:21:50.656 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-09T20:21:50.656 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-09T20:21:50.656 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-09T20:21:50.656 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-09T20:21:50.656 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-09T20:21:50.656 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-09T20:21:50.657 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.657 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-09T20:21:50.658 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-09T20:21:50.658 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-09T20:21:50.658 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-09T20:21:50.658 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:50.658 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-09T20:21:50.658 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:50.658 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-09T20:21:50.658 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-09T20:21:50.658 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:50.658 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:50.658 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-09T20:21:50.658 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.658 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-09T20:21:50.658 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.658 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-09T20:21:50.658 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-09T20:21:50.658 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.658 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-09T20:21:50.658 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.658 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/which2.sh 2026-03-09T20:21:50.658 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/95466/exe ']' 2026-03-09T20:21:50.659 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:50.659 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-09T20:21:50.659 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:50.659 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-09T20:21:50.659 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-09T20:21:50.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:50 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/214214830' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-94822-1"}]: dispatch 2026-03-09T20:21:50.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:50 vm05 ceph-mon[51870]: from='client.24731 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-94822-1"}]: dispatch 2026-03-09T20:21:50.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:50 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/214214830' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm05-94822-1"}]: dispatch 2026-03-09T20:21:50.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:50 vm05 ceph-mon[51870]: from='client.24731 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm05-94822-1"}]: dispatch 2026-03-09T20:21:50.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:50 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/214214830' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-94822-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:50.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:50 vm05 ceph-mon[51870]: from='client.24731 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-94822-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:50.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:50 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1847845512' entity='client.admin' cmd=[{"prefix":"quorum_status"}]: dispatch 2026-03-09T20:21:50.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:50 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/214214830' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-94822-1"}]: dispatch 2026-03-09T20:21:50.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:50 vm05 ceph-mon[61345]: from='client.24731 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-94822-1"}]: dispatch 2026-03-09T20:21:50.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:50 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/214214830' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm05-94822-1"}]: dispatch 2026-03-09T20:21:50.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:50 vm05 ceph-mon[61345]: from='client.24731 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm05-94822-1"}]: dispatch 2026-03-09T20:21:50.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:50 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/214214830' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-94822-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:50.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:50 vm05 ceph-mon[61345]: from='client.24731 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-94822-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:50.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:50 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1847845512' entity='client.admin' cmd=[{"prefix":"quorum_status"}]: dispatch 2026-03-09T20:21:50.661 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-09T20:21:50.661 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-09T20:21:50.661 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-09T20:21:50.661 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.661 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-09T20:21:50.661 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.661 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorzgrep.sh 2026-03-09T20:21:50.661 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-09T20:21:50.661 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.662 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_neorados_misc 2026-03-09T20:21:50.663 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_neorados_misc.log 2026-03-09T20:21:50.663 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-09T20:21:50.664 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ misc: /' 2026-03-09T20:21:50.664 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-09T20:21:50.664 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-09T20:21:50.664 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.664 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-09T20:21:50.664 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.664 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
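The ceph-mon dispatch lines above correspond to the LibRadosWatchNotifyECPP erasure-coded fixture removing the profile and crush rule generated for the previous case and creating the profile for the next one. For reference, the same monitor commands in ceph CLI form (the test issues them through librados rather than the CLI; the names are the per-run generated ones taken from the log):

    # CLI form of the mon commands dispatched above: a k=2,m=1 EC profile with
    # crush-failure-domain=osd so it can be satisfied on a small test cluster.
    ceph osd erasure-code-profile rm testprofile-LibRadosWatchNotifyECPP_vm05-94822-1
    ceph osd crush rule rm LibRadosWatchNotifyECPP_vm05-94822-1
    ceph osd erasure-code-profile set testprofile-LibRadosWatchNotifyECPP_vm05-94822-1 \
        k=2 m=1 crush-failure-domain=osd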
/etc/profile.d/colorgrep.sh 2026-03-09T20:21:50.664 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.666 INFO:tasks.workunit.client.0.vm05.stderr:++ echo write_operations 2026-03-09T20:21:50.666 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-09T20:21:50.666 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-09T20:21:50.666 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-09T20:21:50.666 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.666 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-09T20:21:50.666 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.666 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-09T20:21:50.666 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-09T20:21:50.666 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-09T20:21:50.666 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-09T20:21:50.666 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-09T20:21:50.666 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.666 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-09T20:21:50.666 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.666 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-09T20:21:50.666 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-09T20:21:50.666 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.666 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-09T20:21:50.666 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.666 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/lang.sh 2026-03-09T20:21:50.666 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-09T20:21:50.668 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-09T20:21:50.670 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-09T20:21:50.671 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:50.671 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-09T20:21:50.671 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:50.671 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-09T20:21:50.671 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-09T20:21:50.671 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-09T20:21:50.671 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-09T20:21:50.671 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-09T20:21:50.671 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:50.671 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-09T20:21:50.671 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:50.671 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-09T20:21:50.671 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-09T20:21:50.671 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:50.671 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:50.671 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-09T20:21:50.671 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.671 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-09T20:21:50.671 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.671 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-09T20:21:50.671 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-09T20:21:50.671 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.671 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-09T20:21:50.671 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.671 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/which2.sh 2026-03-09T20:21:50.671 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/95513/exe ']' 2026-03-09T20:21:50.671 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-09T20:21:50.672 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-09T20:21:50.672 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-09T20:21:50.672 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-09T20:21:50.672 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:50.672 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-09T20:21:50.672 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:50.672 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-09T20:21:50.672 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-09T20:21:50.672 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:50.672 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:50.672 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-09T20:21:50.672 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.672 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-09T20:21:50.672 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.672 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-09T20:21:50.672 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-09T20:21:50.672 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.672 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-09T20:21:50.672 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.672 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/which2.sh 2026-03-09T20:21:50.672 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/95549/exe ']' 2026-03-09T20:21:50.672 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/95513/exe 2026-03-09T20:21:50.673 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=write_operations 2026-03-09T20:21:50.674 INFO:tasks.workunit.client.0.vm05.stdout:test write_operations on pid 95648 2026-03-09T20:21:50.674 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-09T20:21:50.674 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=95648 2026-03-09T20:21:50.674 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test write_operations on pid 95648' 2026-03-09T20:21:50.674 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=95648 2026-03-09T20:21:50.674 INFO:tasks.workunit.client.0.vm05.stderr:+ ret=0 2026-03-09T20:21:50.674 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-09T20:21:50.674 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-09T20:21:50.674 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94427 2026-03-09T20:21:50.674 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 94427 2026-03-09T20:21:50.674 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/95466/exe 2026-03-09T20:21:50.674 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-09T20:21:50.674 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-09T20:21:50.674 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-09T20:21:50.674 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-09T20:21:50.674 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-09T20:21:50.674 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-09T20:21:50.674 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-09T20:21:50.674 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-09T20:21:50.674 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-09T20:21:50.674 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-09T20:21:50.674 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-09T20:21:50.674 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-09T20:21:50.674 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/95549/exe 2026-03-09T20:21:50.675 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-09T20:21:50.675 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-09T20:21:50.675 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-09T20:21:50.675 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-09T20:21:50.675 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-09T20:21:50.675 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-09T20:21:50.675 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-09T20:21:50.675 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-09T20:21:50.675 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-09T20:21:50.675 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-09T20:21:50.675 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ 
/home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-09T20:21:50.675 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-09T20:21:50.675 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-09T20:21:50.675 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-09T20:21:50.675 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-09T20:21:50.676 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-09T20:21:50.676 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-09T20:21:50.676 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-09T20:21:50.676 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-09T20:21:50.676 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-09T20:21:50.676 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-09T20:21:50.676 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-09T20:21:50.676 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-09T20:21:50.676 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-09T20:21:50.676 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-09T20:21:50.676 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-09T20:21:50.676 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_neorados_read_operations 2026-03-09T20:21:50.679 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ snapshots: /' 2026-03-09T20:21:50.679 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ read_operations: /' 2026-03-09T20:21:50.680 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_neorados_snapshots.log 2026-03-09T20:21:50.680 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_write_operations 2>&1 | tee ceph_test_neorados_write_operations.log | sed "s/^/ write_operations: /"' 2026-03-09T20:21:50.680 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_neorados_read_operations.log 2026-03-09T20:21:50.680 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_neorados_snapshots 2026-03-09T20:21:50.680 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_neorados_pool 2026-03-09T20:21:50.681 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_neorados_pool.log 2026-03-09T20:21:50.681 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ pool: /' 2026-03-09T20:21:50.685 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-09T20:21:50.685 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-09T20:21:50.715 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-09T20:21:50.715 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-09T20:21:50.715 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-09T20:21:50.715 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.715 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-09T20:21:50.715 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.715 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
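At this point the trace switches from launching tests to reaping them: ret=0, then a loop over "${!pids[@]}" that waits on each recorded pid (wait 94427 above). A sketch of that collection phase, assuming the usual pattern of setting a non-zero ret on any failed wait; the exact failure reporting in the workunit script is not visible in this excerpt:

    # Reconstruction of the reaping phase visible in the trace above.
    ret=0
    for t in "${!pids[@]}"; do
        pid=${pids[$t]}
        if ! wait "$pid"; then               # wait returns the child's exit status
            echo "test $t (pid $pid) failed; see ceph_test_neorados_$t.log" >&2
            ret=1
        fi
    done
    exit $ret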
/etc/profile.d/colorls.sh 2026-03-09T20:21:50.715 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-09T20:21:50.715 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-09T20:21:50.715 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.715 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-09T20:21:50.715 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.716 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-09T20:21:50.716 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-09T20:21:50.716 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-09T20:21:50.716 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.716 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-09T20:21:50.716 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.716 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-09T20:21:50.716 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.721 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-09T20:21:50.721 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-09T20:21:50.721 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-09T20:21:50.721 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.721 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-09T20:21:50.721 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.721 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorzgrep.sh 2026-03-09T20:21:50.721 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-09T20:21:50.721 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.740 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-09T20:21:50.740 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-09T20:21:50.740 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-09T20:21:50.740 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.740 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-09T20:21:50.740 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.740 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-09T20:21:50.740 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-09T20:21:50.740 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-09T20:21:50.740 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-09T20:21:50.740 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-09T20:21:50.740 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.740 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-09T20:21:50.740 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.740 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/gawk.sh 2026-03-09T20:21:50.740 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.740 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-09T20:21:50.740 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.740 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/lang.sh 2026-03-09T20:21:50.740 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-09T20:21:50.740 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-09T20:21:50.747 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:50.747 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-09T20:21:50.747 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:50.747 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-09T20:21:50.747 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-09T20:21:50.747 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-09T20:21:50.750 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-09T20:21:50.750 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-09T20:21:50.750 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-09T20:21:50.750 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:50.750 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-09T20:21:50.750 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:50.750 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-09T20:21:50.750 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-09T20:21:50.750 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:50.750 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:50.750 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-09T20:21:50.751 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.751 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-09T20:21:50.751 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.751 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-09T20:21:50.751 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-09T20:21:50.751 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.751 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-09T20:21:50.751 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.751 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/which2.sh 2026-03-09T20:21:50.751 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/95597/exe ']' 2026-03-09T20:21:50.751 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-09T20:21:50.751 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-09T20:21:50.751 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.751 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-09T20:21:50.753 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/95597/exe 2026-03-09T20:21:50.754 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-09T20:21:50.755 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-09T20:21:50.755 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-09T20:21:50.755 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-09T20:21:50.755 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-09T20:21:50.755 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-09T20:21:50.755 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-09T20:21:50.755 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-09T20:21:50.755 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-09T20:21:50.755 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-09T20:21:50.755 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-09T20:21:50.755 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-09T20:21:50.755 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-09T20:21:50.759 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-09T20:21:50.763 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-09T20:21:50.763 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-09T20:21:50.763 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.763 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-09T20:21:50.763 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.763 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorgrep.sh 2026-03-09T20:21:50.763 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.765 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-09T20:21:50.765 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-09T20:21:50.765 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-09T20:21:50.765 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.765 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-09T20:21:50.765 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.765 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-09T20:21:50.765 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' 
-t 0 ']' 2026-03-09T20:21:50.765 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-09T20:21:50.765 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.765 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-09T20:21:50.765 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.765 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-09T20:21:50.765 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-09T20:21:50.765 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-09T20:21:50.765 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.765 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-09T20:21:50.765 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.765 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-09T20:21:50.765 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.765 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_neorados_watch_notify 2026-03-09T20:21:50.766 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_neorados_watch_notify.log 2026-03-09T20:21:50.766 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ watch_notify: /' 2026-03-09T20:21:50.769 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-09T20:21:50.769 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-09T20:21:50.769 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-09T20:21:50.769 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.769 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-09T20:21:50.769 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.769 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorzgrep.sh 2026-03-09T20:21:50.769 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-09T20:21:50.769 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-09T20:21:50.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:50 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/214214830' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-94822-1"}]: dispatch 2026-03-09T20:21:50.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:50 vm09 ceph-mon[54524]: from='client.24731 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-94822-1"}]: dispatch 2026-03-09T20:21:50.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:50 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/214214830' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm05-94822-1"}]: dispatch 2026-03-09T20:21:50.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:50 vm09 ceph-mon[54524]: from='client.24731 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm05-94822-1"}]: dispatch 2026-03-09T20:21:50.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:50 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/214214830' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-94822-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:50.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:50 vm09 ceph-mon[54524]: from='client.24731 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-94822-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:50.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:50 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1847845512' entity='client.admin' cmd=[{"prefix":"quorum_status"}]: dispatch 2026-03-09T20:21:50.773 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-09T20:21:50.773 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-09T20:21:50.773 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-09T20:21:50.773 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.773 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-09T20:21:50.773 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.773 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-09T20:21:50.773 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-09T20:21:50.773 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-09T20:21:50.773 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-09T20:21:50.773 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-09T20:21:50.773 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.773 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-09T20:21:50.773 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.773 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-09T20:21:50.773 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.773 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-09T20:21:50.773 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.773 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/lang.sh 2026-03-09T20:21:50.773 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-09T20:21:50.773 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-09T20:21:50.774 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:50.774 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-09T20:21:50.774 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:50.774 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-09T20:21:50.774 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-09T20:21:50.776 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-09T20:21:50.779 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-09T20:21:50.779 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-09T20:21:50.779 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-09T20:21:50.779 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-09T20:21:50.779 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-09T20:21:50.779 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:50.779 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-09T20:21:50.779 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-09T20:21:50.779 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-09T20:21:50.779 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-09T20:21:50.779 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-09T20:21:50.779 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.779 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-09T20:21:50.779 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.779 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-09T20:21:50.779 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-09T20:21:50.779 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-09T20:21:50.779 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-09T20:21:50.779 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-09T20:21:50.779 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/which2.sh 2026-03-09T20:21:50.780 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/95648/exe ']' 2026-03-09T20:21:50.780 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/95648/exe 2026-03-09T20:21:50.780 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-09T20:21:50.781 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-09T20:21:50.781 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-09T20:21:50.781 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-09T20:21:50.781 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-09T20:21:50.781 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-09T20:21:50.781 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-09T20:21:50.781 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-09T20:21:50.781 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-09T20:21:50.781 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-09T20:21:50.781 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-09T20:21:50.781 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-09T20:21:50.781 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-09T20:21:50.781 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ write_operations: /' 2026-03-09T20:21:50.782 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_neorados_write_operations.log 2026-03-09T20:21:50.783 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_neorados_write_operations 2026-03-09T20:21:51.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:51 vm09 ceph-mon[54524]: pgmap v38: 132 pgs: 3 peering, 129 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T20:21:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:51 vm09 ceph-mon[54524]: from='client.24731 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-94822-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:21:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:51 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/932956762' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm05-94855-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:51 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/214214830' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm05-94822-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm05-94822-1"}]: dispatch 2026-03-09T20:21:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:51 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/573358478' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm05-94564-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:51 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/2908872493' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm05-94776-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:51 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosList_vm05-94413-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:51 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/655153138' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm05-94573-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:51 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3147514456' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94876-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:51 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2166447669' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm05-95104-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:51 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/993454759' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBig_vm05-94281-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:51 vm09 ceph-mon[54524]: osdmap e62: 8 total, 8 up, 8 in 2026-03-09T20:21:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:51 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1381389895' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm05-94655-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:51 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2253011463' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm05-94758-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:51 vm09 ceph-mon[54524]: from='client.24781 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm05-94855-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:51 vm09 ceph-mon[54524]: from='client.24730 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm05-94564-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:51 vm09 ceph-mon[54524]: from='client.24772 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm05-94776-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:51 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/176010636' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBigPP_vm05-94310-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:51 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2991219534' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm05-94338-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:51 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/178258859' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm05-94350-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:51 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/878325866' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm05-94410-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:51 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-94592-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:51 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2935698927' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:51 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1086256194' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm05-94771-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:51 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/822339260' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95000-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:51 vm09 ceph-mon[54524]: from='client.24731 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm05-94822-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm05-94822-1"}]: dispatch 2026-03-09T20:21:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:51 vm09 ceph-mon[54524]: from='client.24680 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosList_vm05-94413-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:51 vm09 ceph-mon[54524]: from='client.24695 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm05-94573-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:51 vm09 ceph-mon[54524]: from='client.24767 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94876-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:51 vm09 ceph-mon[54524]: from='client.24809 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm05-95104-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:51 vm09 ceph-mon[54524]: from='client.24644 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBig_vm05-94281-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:51 vm09 ceph-mon[54524]: from='client.24713 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm05-94655-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:51 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2753084634' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm05-95462-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:51 vm09 ceph-mon[54524]: from='client.24719 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm05-94758-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:51 vm09 ceph-mon[54524]: from='client.24901 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm05-95462-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:51 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/266869583' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm05-95542-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:51 vm09 ceph-mon[54524]: from='client.24857 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm05-95542-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:51.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[61345]: pgmap v38: 132 pgs: 3 peering, 129 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T20:21:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[61345]: from='client.24731 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-94822-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:21:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/932956762' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm05-94855-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/214214830' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm05-94822-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm05-94822-1"}]: dispatch 2026-03-09T20:21:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/573358478' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm05-94564-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2908872493' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm05-94776-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosList_vm05-94413-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/655153138' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm05-94573-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3147514456' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94876-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/2166447669' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm05-95104-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/993454759' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBig_vm05-94281-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[61345]: osdmap e62: 8 total, 8 up, 8 in 2026-03-09T20:21:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1381389895' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm05-94655-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2253011463' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm05-94758-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[61345]: from='client.24781 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm05-94855-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[61345]: from='client.24730 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm05-94564-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[61345]: from='client.24772 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm05-94776-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/176010636' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBigPP_vm05-94310-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2991219534' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm05-94338-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/178258859' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm05-94350-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/878325866' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm05-94410-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-94592-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2935698927' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1086256194' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm05-94771-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/822339260' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95000-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[61345]: from='client.24731 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm05-94822-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm05-94822-1"}]: dispatch 2026-03-09T20:21:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[61345]: from='client.24680 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosList_vm05-94413-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[61345]: from='client.24695 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm05-94573-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[61345]: from='client.24767 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94876-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[61345]: from='client.24809 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm05-95104-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[61345]: from='client.24644 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBig_vm05-94281-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[61345]: from='client.24713 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm05-94655-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/2753084634' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm05-95462-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[61345]: from='client.24719 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm05-94758-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[61345]: from='client.24901 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm05-95462-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/266869583' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm05-95542-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[61345]: from='client.24857 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm05-95542-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:51.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[51870]: pgmap v38: 132 pgs: 3 peering, 129 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T20:21:51.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[51870]: from='client.24731 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-94822-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:21:51.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/932956762' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm05-94855-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/214214830' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm05-94822-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm05-94822-1"}]: dispatch 2026-03-09T20:21:51.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/573358478' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm05-94564-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2908872493' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm05-94776-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosList_vm05-94413-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/655153138' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm05-94573-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3147514456' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94876-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2166447669' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm05-95104-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/993454759' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBig_vm05-94281-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[51870]: osdmap e62: 8 total, 8 up, 8 in 2026-03-09T20:21:51.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1381389895' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm05-94655-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2253011463' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm05-94758-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[51870]: from='client.24781 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm05-94855-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[51870]: from='client.24730 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm05-94564-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[51870]: from='client.24772 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm05-94776-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/176010636' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBigPP_vm05-94310-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/2991219534' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm05-94338-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/178258859' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm05-94350-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/878325866' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm05-94410-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-94592-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2935698927' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1086256194' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm05-94771-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/822339260' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95000-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[51870]: from='client.24731 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm05-94822-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm05-94822-1"}]: dispatch 2026-03-09T20:21:51.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[51870]: from='client.24680 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosList_vm05-94413-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[51870]: from='client.24695 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm05-94573-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[51870]: from='client.24767 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94876-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[51870]: from='client.24809 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm05-95104-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[51870]: from='client.24644 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBig_vm05-94281-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[51870]: from='client.24713 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm05-94655-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2753084634' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm05-95462-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:51.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[51870]: from='client.24719 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm05-94758-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:51.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[51870]: from='client.24901 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm05-95462-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:51.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/266869583' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm05-95542-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:51.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:51 vm05 ceph-mon[51870]: from='client.24857 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm05-95542-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:52.000 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [==========] Running 12 tests from 1 test suite. 2026-03-09T20:21:52.000 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [----------] Global test environment set-up. 2026-03-09T20:21:52.000 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [----------] 12 tests from AsioRados 2026-03-09T20:21:52.000 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ RUN ] AsioRados.AsyncReadCallback 2026-03-09T20:21:52.000 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ OK ] AsioRados.AsyncReadCallback (0 ms) 2026-03-09T20:21:52.000 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ RUN ] AsioRados.AsyncReadFuture 2026-03-09T20:21:52.001 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ OK ] AsioRados.AsyncReadFuture (1 ms) 2026-03-09T20:21:52.001 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ RUN ] AsioRados.AsyncReadYield 2026-03-09T20:21:52.001 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ OK ] AsioRados.AsyncReadYield (0 ms) 2026-03-09T20:21:52.001 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ RUN ] AsioRados.AsyncWriteCallback 2026-03-09T20:21:52.001 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ OK ] AsioRados.AsyncWriteCallback (39 ms) 2026-03-09T20:21:52.001 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ RUN ] AsioRados.AsyncWriteFuture 2026-03-09T20:21:52.001 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ OK ] AsioRados.AsyncWriteFuture (7 ms) 2026-03-09T20:21:52.001 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ RUN ] AsioRados.AsyncWriteYield 2026-03-09T20:21:52.001 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ OK ] AsioRados.AsyncWriteYield (4 ms) 2026-03-09T20:21:52.001 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ RUN ] AsioRados.AsyncReadOperationCallback 2026-03-09T20:21:52.001 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ OK ] AsioRados.AsyncReadOperationCallback (6 ms) 2026-03-09T20:21:52.001 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ RUN ] AsioRados.AsyncReadOperationFuture 2026-03-09T20:21:52.001 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ OK ] AsioRados.AsyncReadOperationFuture (0 ms) 2026-03-09T20:21:52.001 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ RUN ] AsioRados.AsyncReadOperationYield 2026-03-09T20:21:52.001 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ OK ] AsioRados.AsyncReadOperationYield (2 ms) 2026-03-09T20:21:52.001 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ RUN ] AsioRados.AsyncWriteOperationCallback 2026-03-09T20:21:52.001 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ OK ] AsioRados.AsyncWriteOperationCallback (3 ms) 2026-03-09T20:21:52.001 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ RUN ] AsioRados.AsyncWriteOperationFuture 2026-03-09T20:21:52.001 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ OK ] AsioRados.AsyncWriteOperationFuture (2 ms) 2026-03-09T20:21:52.001 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: 
[ RUN ] AsioRados.AsyncWriteOperationYield 2026-03-09T20:21:52.001 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ OK ] AsioRados.AsyncWriteOperationYield (4 ms) 2026-03-09T20:21:52.001 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [----------] 12 tests from AsioRados (68 ms total) 2026-03-09T20:21:52.001 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: 2026-03-09T20:21:52.001 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [----------] Global test environment tear-down 2026-03-09T20:21:52.001 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [==========] 12 tests from 1 test suite ran. (2305 ms total) 2026-03-09T20:21:52.001 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ PASSED ] 12 tests. 2026-03-09T20:21:52.439 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [==========] Running 11 tests from 3 test suites. 2026-03-09T20:21:52.439 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [----------] Global test environment set-up. 2026-03-09T20:21:52.439 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [----------] 7 tests from LibRadosList 2026-03-09T20:21:52.439 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [ RUN ] LibRadosList.ListObjects 2026-03-09T20:21:52.439 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [ OK ] LibRadosList.ListObjects (633 ms) 2026-03-09T20:21:52.439 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [ RUN ] LibRadosList.ListObjectsZeroInName 2026-03-09T20:21:52.439 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [ OK ] LibRadosList.ListObjectsZeroInName (50 ms) 2026-03-09T20:21:52.439 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [ RUN ] LibRadosList.ListObjectsNS 2026-03-09T20:21:52.439 INFO:tasks.workunit.client.0.vm05.stdout: api_list: myset foo1,foo2,foo3 2026-03-09T20:21:52.439 INFO:tasks.workunit.client.0.vm05.stdout: api_list: foo1 2026-03-09T20:21:52.439 INFO:tasks.workunit.client.0.vm05.stdout: api_list: foo2 2026-03-09T20:21:52.439 INFO:tasks.workunit.client.0.vm05.stdout: api_list: foo3 2026-03-09T20:21:52.439 INFO:tasks.workunit.client.0.vm05.stdout: api_list: myset foo1,foo4,foo5 2026-03-09T20:21:52.439 INFO:tasks.workunit.client.0.vm05.stdout: api_list: foo4 2026-03-09T20:21:52.439 INFO:tasks.workunit.client.0.vm05.stdout: api_list: foo5 2026-03-09T20:21:52.439 INFO:tasks.workunit.client.0.vm05.stdout: api_list: foo1 2026-03-09T20:21:52.439 INFO:tasks.workunit.client.0.vm05.stdout: api_list: myset foo6,foo7 2026-03-09T20:21:52.439 INFO:tasks.workunit.client.0.vm05.stdout: api_list: foo7 2026-03-09T20:21:52.439 INFO:tasks.workunit.client.0.vm05.stdout: api_list: foo6 2026-03-09T20:21:52.439 INFO:tasks.workunit.client.0.vm05.stdout: api_list: myset :foo1,:foo2,:foo3,ns1:foo1,ns1:foo4,ns1:foo5,ns2:foo6,ns2:foo7 2026-03-09T20:21:52.439 INFO:tasks.workunit.client.0.vm05.stdout: api_list: ns1:foo4 2026-03-09T20:21:52.439 INFO:tasks.workunit.client.0.vm05.stdout: api_list: ns1:foo5 2026-03-09T20:21:52.439 INFO:tasks.workunit.client.0.vm05.stdout: api_list: ns2:foo7 2026-03-09T20:21:52.439 INFO:tasks.workunit.client.0.vm05.stdout: api_list: ns2:foo6 2026-03-09T20:21:52.439 INFO:tasks.workunit.client.0.vm05.stdout: api_list: ns1:foo1 2026-03-09T20:21:52.439 INFO:tasks.workunit.client.0.vm05.stdout: api_list: :foo1 2026-03-09T20:21:52.439 INFO:tasks.workunit.client.0.vm05.stdout: api_list: :foo2 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: :foo3 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [ OK ] LibRadosList.ListObjectsNS (119 ms) 
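The LibRadosList.ListObjectsNS run above lists the same pool through individual namespaces (the default namespace, ns1, ns2) and then across all namespaces at once. A minimal C++ sketch of that kind of namespace-scoped listing with the librados API follows; the client name "client.admin", the pool name "test-pool", and the config lookup are placeholders for illustration, and a reachable cluster with a readable ceph.conf is assumed:

    #include <rados/librados.hpp>
    #include <iostream>

    int main() {
      librados::Rados cluster;
      // Placeholder credentials/config: init as client.admin and read the default ceph.conf.
      if (cluster.init2("client.admin", "ceph", 0) < 0) return 1;
      cluster.conf_read_file(nullptr);
      if (cluster.connect() < 0) return 1;

      librados::IoCtx io;
      // "test-pool" is a placeholder; the workunit above creates per-test pools instead.
      if (cluster.ioctx_create("test-pool", io) < 0) { cluster.shutdown(); return 1; }

      // List only objects that live in namespace "ns1".
      io.set_namespace("ns1");
      for (auto it = io.nobjects_begin(); it != io.nobjects_end(); ++it)
        std::cout << it->get_nspace() << ":" << it->get_oid() << "\n";

      // List every object regardless of namespace, comparable to the
      // ":foo1,...,ns1:foo1,...,ns2:foo7" listing printed by the test above.
      io.set_namespace(librados::all_nspaces);
      for (auto it = io.nobjects_begin(); it != io.nobjects_end(); ++it)
        std::cout << it->get_nspace() << ":" << it->get_oid() << "\n";

      cluster.shutdown();
      return 0;
    }

Built with -lrados, such a program prints "namespace:oid" pairs comparable to the myset lines in the log output above.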
2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [ RUN ] LibRadosList.ListObjectsStart 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 1 0 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 10 0 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 13 0 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 7 0 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 14 0 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 0 0 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 15 0 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 11 0 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 5 0 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 8 0 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 6 0 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 3 0 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 4 0 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 12 0 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 9 0 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 2 0 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: have 1 expect one of 0,1,10,11,12,13,14,15,2,3,4,5,6,7,8,9 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [ OK ] LibRadosList.ListObjectsStart (73 ms) 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [ RUN ] LibRadosList.ListObjectsCursor 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: x cursor=MIN 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > oid=1 cursor=11:02547ec2:::1:head 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > oid=10 cursor=11:52ea6a34:::10:head 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > oid=13 cursor=11:566253c9:::13:head 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > oid=7 cursor=11:5c6b0b28:::7:head 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > oid=14 cursor=11:62a1935d:::14:head 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > oid=0 cursor=11:6cac518f:::0:head 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > oid=15 cursor=11:863748b0:::15:head 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > oid=11 cursor=11:89d3ae78:::11:head 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > oid=5 cursor=11:b29083e3:::5:head 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > oid=8 cursor=11:bd63b0f1:::8:head 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > oid=6 cursor=11:c4fdafeb:::6:head 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > oid=3 cursor=11:cfc208b3:::3:head 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > oid=4 cursor=11:d83876eb:::4:head 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > oid=12 cursor=11:de5d7c5f:::12:head 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > oid=9 cursor=11:e960b815:::9:head 
2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > oid=2 cursor=11:f905c69b:::2:head 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: FIRST> seek to MIN oid=1 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : oid=1 cursor=11:02547ec2:::1:head 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 11:02547ec2:::1:head 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 11:02547ec2:::1:head -> 1 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : oid=10 cursor=11:52ea6a34:::10:head 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 11:52ea6a34:::10:head 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 11:52ea6a34:::10:head -> 10 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : oid=13 cursor=11:566253c9:::13:head 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 11:566253c9:::13:head 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 11:566253c9:::13:head -> 13 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : oid=7 cursor=11:5c6b0b28:::7:head 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 11:5c6b0b28:::7:head 2026-03-09T20:21:52.440 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 11:5c6b0b28:::7:head -> 7 2026-03-09T20:21:52.472 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : oid=14 cu api_c_read_operations: Running main() from gmock_main.cc 2026-03-09T20:21:52.472 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [==========] Running 17 tests from 1 test suite. 2026-03-09T20:21:52.472 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [----------] Global test environment set-up. 
2026-03-09T20:21:52.472 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [----------] 17 tests from CReadOpsTest 2026-03-09T20:21:52.472 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.NewDelete 2026-03-09T20:21:52.472 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ OK ] CReadOpsTest.NewDelete (0 ms) 2026-03-09T20:21:52.472 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.SetOpFlags 2026-03-09T20:21:52.472 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ OK ] CReadOpsTest.SetOpFlags (498 ms) 2026-03-09T20:21:52.472 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.AssertExists 2026-03-09T20:21:52.472 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ OK ] CReadOpsTest.AssertExists (92 ms) 2026-03-09T20:21:52.472 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.AssertVersion 2026-03-09T20:21:52.472 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ OK ] CReadOpsTest.AssertVersion (24 ms) 2026-03-09T20:21:52.473 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.CmpXattr 2026-03-09T20:21:52.473 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ OK ] CReadOpsTest.CmpXattr (61 ms) 2026-03-09T20:21:52.473 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.Read 2026-03-09T20:21:52.473 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ OK ] CReadOpsTest.Read (9 ms) 2026-03-09T20:21:52.473 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.Checksum 2026-03-09T20:21:52.473 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ OK ] CReadOpsTest.Checksum (9 ms) 2026-03-09T20:21:52.473 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.RWOrderedRead 2026-03-09T20:21:52.473 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ OK ] CReadOpsTest.RWOrderedRead (4 ms) 2026-03-09T20:21:52.473 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.ShortRead 2026-03-09T20:21:52.473 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ OK ] CReadOpsTest.ShortRead (12 ms) 2026-03-09T20:21:52.473 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.Exec 2026-03-09T20:21:52.473 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ OK ] CReadOpsTest.Exec (6 ms) 2026-03-09T20:21:52.473 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.ExecUserBuf 2026-03-09T20:21:52.473 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ OK ] CReadOpsTest.ExecUserBuf (4 ms) 2026-03-09T20:21:52.473 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.Stat 2026-03-09T20:21:52.473 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ OK ] CReadOpsTest.Stat (7 ms) 2026-03-09T20:21:52.473 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.Stat2 2026-03-09T20:21:52.473 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ OK ] CReadOpsTest.Stat2 (8 ms) 2026-03-09T20:21:52.473 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.Omap 2026-03-09T20:21:52.473 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ OK ] CReadOpsTest.Omap (21 
ms) 2026-03-09T20:21:52.473 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.OmapNuls 2026-03-09T20:21:52.473 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ OK ] CReadOpsTest.OmapNuls (14 ms) 2026-03-09T20:21:52.473 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.GetXattrs 2026-03-09T20:21:52.473 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ OK ] CReadOpsTest.GetXattrs (10 ms) 2026-03-09T20:21:52.473 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.CmpExt 2026-03-09T20:21:52.473 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ OK ] CReadOpsTest.CmpExt (4 ms) 2026-03-09T20:21:52.473 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [----------] 17 tests from CReadOpsTest (784 ms total) 2026-03-09T20:21:52.473 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: 2026-03-09T20:21:52.473 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [----------] Global test environment tear-down 2026-03-09T20:21:52.473 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [==========] 17 tests from 1 test suite ran. (2290 ms total) 2026-03-09T20:21:52.473 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ PASSED ] 17 tests. 2026-03-09T20:21:52.489 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd_pp: Running main() from gmock_main.cc 2026-03-09T20:21:52.489 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd_pp: [==========] Running 3 tests from 1 test suite. 2026-03-09T20:21:52.489 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd_pp: [----------] Global test environment set-up. 2026-03-09T20:21:52.490 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd_pp: [----------] 3 tests from LibRadosCmd 2026-03-09T20:21:52.490 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd_pp: [ RUN ] LibRadosCmd.MonDescribePP 2026-03-09T20:21:52.490 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd_pp: [ OK ] LibRadosCmd.MonDescribePP (86 ms) 2026-03-09T20:21:52.490 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd_pp: [ RUN ] LibRadosCmd.OSDCmdPP 2026-03-09T20:21:52.490 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd_pp: [ OK ] LibRadosCmd.OSDCmdPP (37 ms) 2026-03-09T20:21:52.490 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd_pp: [ RUN ] LibRadosCmd.PGCmdPP 2026-03-09T20:21:52.490 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd_pp: [ OK ] LibRadosCmd.PGCmdPP (2287 ms) 2026-03-09T20:21:52.490 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd_pp: [----------] 3 tests from LibRadosCmd (2410 ms total) 2026-03-09T20:21:52.490 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd_pp: 2026-03-09T20:21:52.490 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd_pp: [----------] Global test environment tear-down 2026-03-09T20:21:52.490 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd_pp: [==========] 3 tests from 1 test suite ran. (2410 ms total) 2026-03-09T20:21:52.490 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd_pp: [ PASSED ] 3 tests. 2026-03-09T20:21:52.609 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: Running main() from gmock_main.cc 2026-03-09T20:21:52.609 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: [==========] Running 4 tests from 1 test suite. 2026-03-09T20:21:52.609 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: [----------] Global test environment set-up. 
2026-03-09T20:21:52.609 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: [----------] 4 tests from LibRadosCmd 2026-03-09T20:21:52.609 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: [ RUN ] LibRadosCmd.MonDescribe 2026-03-09T20:21:52.609 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: [ OK ] LibRadosCmd.MonDescribe (45 ms) 2026-03-09T20:21:52.609 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: [ RUN ] LibRadosCmd.OSDCmd 2026-03-09T20:21:52.609 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: [ OK ] LibRadosCmd.OSDCmd (35 ms) 2026-03-09T20:21:52.609 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: [ RUN ] LibRadosCmd.PGCmd 2026-03-09T20:21:52.609 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: [ OK ] LibRadosCmd.PGCmd (2472 ms) 2026-03-09T20:21:52.609 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: [ RUN ] LibRadosCmd.WatchLog 2026-03-09T20:21:52.609 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:51.427017+0000 mon.a [INF] from='client.24781 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm05-94855-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.610 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:51.427061+0000 mon.a [INF] from='client.24730 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm05-94564-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.610 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:51.427087+0000 mon.a [INF] from='client.24772 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm05-94776-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.610 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:51.427116+0000 mon.a [INF] from='client.? v1:192.168.123.105:0/176010636' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBigPP_vm05-94310-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.610 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:51.427182+0000 mon.a [INF] from='client.? v1:192.168.123.105:0/2991219534' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm05-94338-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.610 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:51.427204+0000 mon.a [INF] from='client.? v1:192.168.123.105:0/178258859' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm05-94350-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.610 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:51.427224+0000 mon.a [INF] from='client.? v1:192.168.123.105:0/878325866' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm05-94410-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.610 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:51.427244+0000 mon.a [INF] from='client.? 
v1:192.168.123.105:0/637910135' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-94592-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.610 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:51.427310+0000 mon.a [INF] from='client.? v1:192.168.123.105:0/2935698927' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.610 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:51.427500+0000 mon.a [INF] from='client.? v1:192.168.123.105:0/1086256194' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm05-94771-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.610 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:51.427523+0000 mon.a [INF] from='client.? v1:192.168.123.105:0/822339260' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95000-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.610 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:51.427543+0000 mon.a [INF] from='client.24680 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosList_vm05-94413-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:52 vm09 ceph-mon[54524]: from='client.24781 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm05-94855-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:52 vm09 ceph-mon[54524]: from='client.24730 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm05-94564-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:52 vm09 ceph-mon[54524]: from='client.24772 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm05-94776-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:52 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/176010636' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBigPP_vm05-94310-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:52 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2991219534' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm05-94338-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:52 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/178258859' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm05-94350-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:52 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/878325866' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm05-94410-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:52 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-94592-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:52 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2935698927' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:52 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1086256194' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm05-94771-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:52 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/822339260' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95000-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:52 vm09 ceph-mon[54524]: from='client.24680 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosList_vm05-94413-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:52 vm09 ceph-mon[54524]: from='client.24695 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm05-94573-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:52 vm09 ceph-mon[54524]: from='client.24767 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94876-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:52 vm09 ceph-mon[54524]: from='client.24809 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm05-95104-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:52 vm09 ceph-mon[54524]: from='client.24644 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBig_vm05-94281-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:52 vm09 ceph-mon[54524]: from='client.24713 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm05-94655-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:52 vm09 ceph-mon[54524]: from='client.24719 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm05-94758-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:52 vm09 ceph-mon[54524]: 
from='client.24901 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm05-95462-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:21:52.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:52 vm09 ceph-mon[54524]: from='client.24857 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm05-95542-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:21:52.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:52 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2753084634' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritevm05-95462-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm05-95462-1"}]: dispatch 2026-03-09T20:21:52.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:52 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/266869583' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsvm05-95542-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm05-95542-1"}]: dispatch 2026-03-09T20:21:52.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:52 vm09 ceph-mon[54524]: osdmap e63: 8 total, 8 up, 8 in 2026-03-09T20:21:52.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:52 vm09 ceph-mon[54524]: from='client.24901 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritevm05-95462-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm05-95462-1"}]: dispatch 2026-03-09T20:21:52.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:52 vm09 ceph-mon[54524]: from='client.24857 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsvm05-95542-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm05-95542-1"}]: dispatch 2026-03-09T20:21:52.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:52 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3422680420' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:52.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[61345]: from='client.24781 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm05-94855-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[61345]: from='client.24730 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm05-94564-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[61345]: from='client.24772 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm05-94776-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/176010636' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBigPP_vm05-94310-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2991219534' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm05-94338-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/178258859' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm05-94350-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/878325866' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm05-94410-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-94592-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2935698927' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1086256194' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm05-94771-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/822339260' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95000-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[61345]: from='client.24680 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosList_vm05-94413-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[61345]: from='client.24695 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm05-94573-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[61345]: from='client.24767 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94876-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[61345]: from='client.24809 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm05-95104-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[61345]: from='client.24644 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBig_vm05-94281-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[61345]: from='client.24713 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm05-94655-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[61345]: from='client.24719 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm05-94758-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[61345]: from='client.24901 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm05-95462-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[61345]: from='client.24857 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm05-95542-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2753084634' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritevm05-95462-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm05-95462-1"}]: dispatch 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/266869583' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsvm05-95542-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm05-95542-1"}]: dispatch 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[61345]: osdmap e63: 8 total, 8 up, 8 in 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[61345]: from='client.24901 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritevm05-95462-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm05-95462-1"}]: dispatch 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[61345]: from='client.24857 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsvm05-95542-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm05-95542-1"}]: dispatch 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3422680420' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[51870]: from='client.24781 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm05-94855-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[51870]: from='client.24730 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm05-94564-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[51870]: from='client.24772 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm05-94776-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/176010636' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBigPP_vm05-94310-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2991219534' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm05-94338-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/178258859' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm05-94350-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/878325866' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm05-94410-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-94592-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2935698927' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1086256194' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm05-94771-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/822339260' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95000-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[51870]: from='client.24680 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosList_vm05-94413-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[51870]: from='client.24695 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm05-94573-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[51870]: from='client.24767 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94876-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[51870]: from='client.24809 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm05-95104-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[51870]: from='client.24644 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBig_vm05-94281-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[51870]: from='client.24713 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm05-94655-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[51870]: from='client.24719 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm05-94758-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[51870]: 
from='client.24901 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm05-95462-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[51870]: from='client.24857 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm05-95542-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2753084634' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritevm05-95462-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm05-95462-1"}]: dispatch 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/266869583' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsvm05-95542-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm05-95542-1"}]: dispatch 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[51870]: osdmap e63: 8 total, 8 up, 8 in 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[51870]: from='client.24901 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritevm05-95462-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm05-95462-1"}]: dispatch 2026-03-09T20:21:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[51870]: from='client.24857 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsvm05-95542-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm05-95542-1"}]: dispatch 2026-03-09T20:21:52.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:52 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3422680420' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:52.993 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T2rsor=11:62a1935d:::14:head 2026-03-09T20:21:52.993 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 11:62a1935d:::14:head 2026-03-09T20:21:52.993 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 11:62a1935d:::14:head -> 14 2026-03-09T20:21:52.993 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : oid=0 cursor=11:6cac518f:::0:head 2026-03-09T20:21:52.993 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 11:6cac518f:::0:head 2026-03-09T20:21:52.993 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 11:6cac518f:::0:head -> 0 2026-03-09T20:21:52.993 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : oid=15 cursor=11:863748b0:::15:head 2026-03-09T20:21:52.993 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 11:863748b0:::15:head 2026-03-09T20:21:52.993 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 11:863748b0:::15:head -> 15 2026-03-09T20:21:52.993 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : oid=11 cursor=11:89d3ae78:::11:head 2026-03-09T20:21:52.993 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 11:89d3ae78:::11:head 2026-03-09T20:21:52.993 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 11:89d3ae78:::11:head -> 11 2026-03-09T20:21:52.993 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : oid=5 cursor=11:b29083e3:::5:head 2026-03-09T20:21:52.993 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 11:b29083e3:::5:head 2026-03-09T20:21:52.993 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 11:b29083e3:::5:head -> 5 2026-03-09T20:21:52.993 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : oid=8 cursor=11:bd63b0f1:::8:head 2026-03-09T20:21:52.993 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 11:bd63b0f1:::8:head 2026-03-09T20:21:52.993 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 11:bd63b0f1:::8:head -> 8 2026-03-09T20:21:52.993 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : oid=6 cursor=11:c4fdafeb:::6:head 2026-03-09T20:21:52.993 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 11:c4fdafeb:::6:head 2026-03-09T20:21:52.993 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 11:c4fdafeb:::6:head -> 6 2026-03-09T20:21:52.993 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : oid=3 cursor=11:cfc208b3:::3:head 2026-03-09T20:21:52.993 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 11:cfc208b3:::3:head 2026-03-09T20:21:52.993 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 11:cfc208b3:::3:head -> 3 2026-03-09T20:21:52.993 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : oid=4 cursor=11:d83876eb:::4:head 2026-03-09T20:21:52.993 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 11:d83876eb:::4:head 2026-03-09T20:21:52.993 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 11:d83876eb:::4:head -> 4 2026-03-09T20:21:52.993 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : oid=12 cursor=11:de5d7c5f:::12:head 2026-03-09T20:21:52.994 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 11:de5d7c5f:::12:head 2026-03-09T20:21:52.994 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 11:de5d7c5f:::12:head -> 12 2026-03-09T20:21:52.994 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : oid=9 
cursor=11:e960b815:::9:head 2026-03-09T20:21:52.994 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 11:e960b815:::9:head 2026-03-09T20:21:52.994 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 11:e960b815:::9:head -> 9 2026-03-09T20:21:52.994 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : oid=2 cursor=11:f905c69b:::2:head 2026-03-09T20:21:52.994 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 11:f905c69b:::2:head 2026-03-09T20:21:52.994 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 11:f905c69b:::2:head -> 2 2026-03-09T20:21:52.994 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 11:566253c9:::13:head 2026-03-09T20:21:52.994 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : cursor()=11:566253c9:::13:head expected=11:566253c9:::13:head 2026-03-09T20:21:52.994 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 11:566253c9:::13:head -> 13 2026-03-09T20:21:52.994 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : entry=13 expected=13 2026-03-09T20:21:52.994 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 11:62a1935d:::14:head 2026-03-09T20:21:52.994 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : cursor()=11:62a1935d:::14:head expected=11:62a1935d:::14:head 2026-03-09T20:21:52.994 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 11:62a1935d:::14:head -> 14 2026-03-09T20:21:52.994 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : entry=14 expected=14 2026-03-09T20:21:52.994 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 11:52ea6a34:::10:head 2026-03-09T20:21:52.994 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : cursor()=11:52ea6a34:::10:head expected=11:52ea6a34:::10:head 2026-03-09T20:21:52.994 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 11:52ea6a34:::10:head -> 10 2026-03-09T20:21:52.994 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : entry=10 expected=10 2026-03-09T20:21:52.994 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 11:5c6b0b28:::7:head 2026-03-09T20:21:52.994 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : cursor()=11:5c6b0b28:::7:head expected=11:5c6b0b28:::7:head 2026-03-09T20:21:52.994 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 11:5c6b0b28:::7:head -> 7 2026-03-09T20:21:52.994 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : entry=7 expected=7 2026-03-09T20:21:52.994 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 11:de5d7c5f:::12:head 2026-03-09T20:21:52.994 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : cursor()=11:de5d7c5f:::12:head expected=11:de5d7c5f:::12:head 2026-03-09T20:21:52.994 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 11:de5d7c5f:::12:head -> 12 2026-03-09T20:21:52.994 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : entry=12 expected=12 2026-03-09T20:21:52.994 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 11:f905c69b:::2:head 2026-03-09T20:21:52.994 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : cursor()=11:f905c69b:::2:head expected=11:f905c69b:::2:head 2026-03-09T20:21:52.994 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 11:f905c69b:::2:head -> 2 2026-03-09T20:21:52.994 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : entry=2 expected=2 2026-03-09T20:21:52.994 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 11:e960b815:::9:head 2026-03-09T20:21:52.994 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : cursor()=11:e960b815:::9:head expected=11:e960b815:::9:head 2026-03-09T20:21:52.994 
INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 11:e960b815:::9:head -> 9 2026-03-09T20:21:52.994 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : entry=9 expected=9 2026-03-09T20:21:52.994 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 11:d83876eb:::4:head 2026-03-09T20:21:52.994 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : cursor()=11:d83876eb:::4:head expected=11:d83876eb:::4:head 2026-03-09T20:21:52.994 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 11:d83876eb:::4:head -> 4 2026-03-09T20:21:53.439 INFO:tasks.workunit.client.0.vm05.stdout: api_l0:21:51.427564+0000 mon.a [INF] from='client.24695 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm05-94573-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:53.439 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:51.427585+0000 mon.a [INF] from='client.24767 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94876-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:53.439 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:51.427615+0000 mon.a [INF] from='client.24809 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm05-95104-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:53.439 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:51.427635+0000 mon.a [INF] from='client.24644 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBig_vm05-94281-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:53.439 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:51.427657+0000 mon.a [INF] from='client.24713 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm05-94655-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:53.439 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:51.427677+0000 mon.a [INF] from='client.24719 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm05-94758-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:53.439 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:51.427702+0000 mon.a [INF] from='client.24901 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm05-95462-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:21:53.439 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:51.427725+0000 mon.a [INF] from='client.24857 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm05-95542-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:21:53.439 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:51.434900+0000 mon.b [INF] from='client.? v1:192.168.123.105:0/2753084634' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritevm05-95462-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm05-95462-1"}]: dispatch 2026-03-09T20:21:53.439 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:51.473860+0000 mon.c [INF] from='client.? 
v1:192.168.123.105:0/266869583' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsvm05-95542-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm05-95542-1"}]: dispatch 2026-03-09T20:21:53.439 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:51.491556+0000 mon.a [INF] from='client.24901 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritevm05-95462-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm05-95462-1"}]: dispatch 2026-03-09T20:21:53.439 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:51.492519+0000 mon.a [INF] from='client.24857 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsvm05-95542-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm05-95542-1"}]: dispatch 2026-03-09T20:21:53.439 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:51.508683+0000 mon.a [INF] from='client.? v1:192.168.123.105:0/3422680420' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:53.439 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:52.428203+0000 mon.a [WRN] Health check failed: 16 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:21:53.439 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:52.428227+0000 mon.a [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 2 pgs peering) 2026-03-09T20:21:53.439 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:52.445606+0000 mon.a [INF] from='client.24731 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm05-94822-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm05-94822-1"}]': finished 2026-03-09T20:21:53.439 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:52.445648+0000 mon.a [INF] from='client.? v1:192.168.123.105:0/3422680420' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:53.439 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:52.510951+0000 mon.b [INF] from='client.? v1:192.168.123.105:0/2315717979' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm05-94350-16"}]: dispatch 2026-03-09T20:21:53.440 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:52.513810+0000 mon.a [INF] from='client.24967 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm05-94350-16"}]: dispatch 2026-03-09T20:21:53.440 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:52.518999+0000 mon.c [INF] from='client.? 
v1:192.168.123.105:0/2289751889' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:53.440 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:52.520124+0000 mon.a [INF] from='client.24905 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:53.440 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:52.564895+0000 mon.b [INF] from='client.? v1:192.168.123.105:0/2315717979' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm05-94350-16"}]: dispatch 2026-03-09T20:21:53.440 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:52.566716+0000 mon.a [INF] from='client.24967 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm05-94350-16"}]: dispatch 2026-03-09T20:21:53.440 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:52.567722+0000 mon.b [INF] from='client.? v1:192.168.123.105:0/2315717979' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm05-94350-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:53.440 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:52.568653+0000 mon.a [INF] from='client.24967 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm05-94350-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:53.440 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:52.596702+0000 mon.b [INF] from='client.? v1:192.168.123.105:0/696721188' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:21:53.440 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:52.602503+0000 mon.b [INF] from='client.? v1:192.168.123.105:0/3936575652' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-09T20:21:53.440 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:52.602635+0000 client.admin [INF] onexx 2026-03-09T20:21:53.440 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:52.611174+0000 mon.b [INF] from='client.? 
v1:192.168.123.105:0/1240899669' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm05-94776-7"}]: dispatch 2026-03-09T20:21:53.440 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:52.611717+0000 mon.a [INF] from='client.25003 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:21:53.440 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:52.611864+0000 mon.a [INF] from='client.25012 ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-09T20:21:53.440 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:52.612233+0000 mon.a [INF] from='client.25021 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm05-94776-7"}]: dispatch 2026-03-09T20:21:53.470 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:52.615881+0000 mon.b [INF] from=' cls: Running main() from gmock_main.cc 2026-03-09T20:21:53.470 INFO:tasks.workunit.client.0.vm05.stdout: cls: [==========] Running 1 test from 1 test suite. 2026-03-09T20:21:53.470 INFO:tasks.workunit.client.0.vm05.stdout: cls: [----------] Global test environment set-up. 2026-03-09T20:21:53.470 INFO:tasks.workunit.client.0.vm05.stdout: cls: [----------] 1 test from NeoRadosCls 2026-03-09T20:21:53.470 INFO:tasks.workunit.client.0.vm05.stdout: cls: [ RUN ] NeoRadosCls.DNE 2026-03-09T20:21:53.470 INFO:tasks.workunit.client.0.vm05.stdout: cls: [ OK ] NeoRadosCls.DNE (3167 ms) 2026-03-09T20:21:53.470 INFO:tasks.workunit.client.0.vm05.stdout: cls: [----------] 1 test from NeoRadosCls (3168 ms total) 2026-03-09T20:21:53.470 INFO:tasks.workunit.client.0.vm05.stdout: cls: 2026-03-09T20:21:53.470 INFO:tasks.workunit.client.0.vm05.stdout: cls: [----------] Global test environment tear-down 2026-03-09T20:21:53.470 INFO:tasks.workunit.client.0.vm05.stdout: cls: [==========] 1 test from 1 test suite ran. (3168 ms total) 2026-03-09T20:21:53.470 INFO:tasks.workunit.client.0.vm05.stdout: cls: [ PASSED ] 1 test. 2026-03-09T20:21:53.486 INFO:tasks.workunit.client.0.vm05.stdout: handler_error: Running main() from gmock_main.cc 2026-03-09T20:21:53.486 INFO:tasks.workunit.client.0.vm05.stdout: handler_error: [==========] Running 1 test from 1 test suite. 2026-03-09T20:21:53.486 INFO:tasks.workunit.client.0.vm05.stdout: handler_error: [----------] Global test environment set-up. 2026-03-09T20:21:53.486 INFO:tasks.workunit.client.0.vm05.stdout: handler_error: [----------] 1 test from neocls_handler_error 2026-03-09T20:21:53.486 INFO:tasks.workunit.client.0.vm05.stdout: handler_error: [ RUN ] neocls_handler_error.test_handler_error 2026-03-09T20:21:53.486 INFO:tasks.workunit.client.0.vm05.stdout: handler_error: [ OK ] neocls_handler_error.test_handler_error (2944 ms) 2026-03-09T20:21:53.486 INFO:tasks.workunit.client.0.vm05.stdout: handler_error: [----------] 1 test from neocls_handler_error (2944 ms total) 2026-03-09T20:21:53.486 INFO:tasks.workunit.client.0.vm05.stdout: handler_error: 2026-03-09T20:21:53.486 INFO:tasks.workunit.client.0.vm05.stdout: handler_error: [----------] Global test environment tear-down 2026-03-09T20:21:53.486 INFO:tasks.workunit.client.0.vm05.stdout: handler_error: [==========] 1 test from 1 test suite ran. (2944 ms total) 2026-03-09T20:21:53.486 INFO:tasks.workunit.client.0.vm05.stdout: handler_error: [ PASSED ] 1 test. 
2026-03-09T20:21:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:53 vm09 ceph-mon[54524]: pgmap v41: 1220 pgs: 1088 unknown, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 744 B/s rd, 0 op/s; 68 B/s, 3 objects/s recovering 2026-03-09T20:21:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:53 vm09 ceph-mon[54524]: Health check failed: 16 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:21:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:53 vm09 ceph-mon[54524]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 2 pgs peering) 2026-03-09T20:21:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:53 vm09 ceph-mon[54524]: from='client.24731 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm05-94822-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm05-94822-1"}]': finished 2026-03-09T20:21:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:53 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3422680420' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:53 vm09 ceph-mon[54524]: osdmap e64: 8 total, 8 up, 8 in 2026-03-09T20:21:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:53 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2315717979' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm05-94350-16"}]: dispatch 2026-03-09T20:21:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:53 vm09 ceph-mon[54524]: from='client.24967 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm05-94350-16"}]: dispatch 2026-03-09T20:21:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:53 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2289751889' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:53 vm09 ceph-mon[54524]: from='client.24905 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:53 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2315717979' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm05-94350-16"}]: dispatch 2026-03-09T20:21:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:53 vm09 ceph-mon[54524]: from='client.24967 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm05-94350-16"}]: dispatch 2026-03-09T20:21:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:53 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/2315717979' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm05-94350-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:53 vm09 ceph-mon[54524]: from='client.24967 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm05-94350-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:53 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/696721188' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:21:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:53 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3936575652' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-09T20:21:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:53 vm09 ceph-mon[54524]: onexx 2026-03-09T20:21:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:53 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1240899669' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm05-94776-7"}]: dispatch 2026-03-09T20:21:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:53 vm09 ceph-mon[54524]: from='client.25003 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:21:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:53 vm09 ceph-mon[54524]: from='client.25012 ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-09T20:21:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:53 vm09 ceph-mon[54524]: from='client.25021 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm05-94776-7"}]: dispatch 2026-03-09T20:21:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:53 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/696721188' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:21:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:53 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2357293795' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm05-94771-7"}]: dispatch 2026-03-09T20:21:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:53 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1240899669' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm05-94776-7"}]: dispatch 2026-03-09T20:21:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:53 vm09 ceph-mon[54524]: from='client.25003 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:21:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:53 vm09 ceph-mon[54524]: from='client.25006 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm05-94771-7"}]: dispatch 2026-03-09T20:21:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:53 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/696721188' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm05-94338-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:53 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2357293795' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm05-94771-7"}]: dispatch 2026-03-09T20:21:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:53 vm09 ceph-mon[54524]: from='client.25021 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm05-94776-7"}]: dispatch 2026-03-09T20:21:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:53 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1240899669' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm05-94776-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:53 vm09 ceph-mon[54524]: from='client.25003 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm05-94338-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:53 vm09 ceph-mon[54524]: from='client.25006 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm05-94771-7"}]: dispatch 2026-03-09T20:21:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:53 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2357293795' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm05-94771-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:53 vm09 ceph-mon[54524]: from='client.25021 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm05-94776-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:53 vm09 ceph-mon[54524]: from='client.25006 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm05-94771-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:53 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:21:53.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[61345]: pgmap v41: 1220 pgs: 1088 unknown, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 744 B/s rd, 0 op/s; 68 B/s, 3 objects/s recovering 2026-03-09T20:21:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[61345]: Health check failed: 16 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:21:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[61345]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 2 pgs peering) 2026-03-09T20:21:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[61345]: from='client.24731 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm05-94822-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm05-94822-1"}]': finished 2026-03-09T20:21:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3422680420' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[61345]: osdmap e64: 8 total, 8 up, 8 in 2026-03-09T20:21:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2315717979' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm05-94350-16"}]: dispatch 2026-03-09T20:21:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[61345]: from='client.24967 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm05-94350-16"}]: dispatch 2026-03-09T20:21:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2289751889' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[61345]: from='client.24905 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2315717979' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm05-94350-16"}]: dispatch 2026-03-09T20:21:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[61345]: from='client.24967 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm05-94350-16"}]: dispatch 2026-03-09T20:21:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/2315717979' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm05-94350-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[61345]: from='client.24967 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm05-94350-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/696721188' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:21:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3936575652' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-09T20:21:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[61345]: onexx 2026-03-09T20:21:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1240899669' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm05-94776-7"}]: dispatch 2026-03-09T20:21:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[61345]: from='client.25003 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:21:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[61345]: from='client.25012 ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-09T20:21:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[61345]: from='client.25021 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm05-94776-7"}]: dispatch 2026-03-09T20:21:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/696721188' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:21:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2357293795' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm05-94771-7"}]: dispatch 2026-03-09T20:21:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1240899669' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm05-94776-7"}]: dispatch 2026-03-09T20:21:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[61345]: from='client.25003 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:21:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[61345]: from='client.25006 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm05-94771-7"}]: dispatch 2026-03-09T20:21:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/696721188' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm05-94338-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2357293795' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm05-94771-7"}]: dispatch 2026-03-09T20:21:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[61345]: from='client.25021 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm05-94776-7"}]: dispatch 2026-03-09T20:21:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1240899669' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm05-94776-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[61345]: from='client.25003 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm05-94338-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[61345]: from='client.25006 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm05-94771-7"}]: dispatch 2026-03-09T20:21:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2357293795' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm05-94771-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[61345]: from='client.25021 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm05-94776-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[61345]: from='client.25006 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm05-94771-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:21:53.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[51870]: pgmap v41: 1220 pgs: 1088 unknown, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 744 B/s rd, 0 op/s; 68 B/s, 3 objects/s recovering 2026-03-09T20:21:53.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[51870]: Health check failed: 16 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:21:53.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[51870]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 2 pgs peering) 2026-03-09T20:21:53.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[51870]: from='client.24731 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm05-94822-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm05-94822-1"}]': finished 2026-03-09T20:21:53.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3422680420' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:53.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[51870]: osdmap e64: 8 total, 8 up, 8 in 2026-03-09T20:21:53.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2315717979' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm05-94350-16"}]: dispatch 2026-03-09T20:21:53.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[51870]: from='client.24967 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm05-94350-16"}]: dispatch 2026-03-09T20:21:53.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2289751889' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:53.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[51870]: from='client.24905 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:53.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2315717979' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm05-94350-16"}]: dispatch 2026-03-09T20:21:53.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[51870]: from='client.24967 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm05-94350-16"}]: dispatch 2026-03-09T20:21:53.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/2315717979' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm05-94350-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:53.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[51870]: from='client.24967 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm05-94350-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:53.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/696721188' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:21:53.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3936575652' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-09T20:21:53.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[51870]: onexx 2026-03-09T20:21:53.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1240899669' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm05-94776-7"}]: dispatch 2026-03-09T20:21:53.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[51870]: from='client.25003 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:21:53.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[51870]: from='client.25012 ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-09T20:21:53.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[51870]: from='client.25021 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm05-94776-7"}]: dispatch 2026-03-09T20:21:53.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/696721188' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:21:53.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2357293795' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm05-94771-7"}]: dispatch 2026-03-09T20:21:53.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1240899669' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm05-94776-7"}]: dispatch 2026-03-09T20:21:53.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[51870]: from='client.25003 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:21:53.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[51870]: from='client.25006 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm05-94771-7"}]: dispatch 2026-03-09T20:21:53.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/696721188' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm05-94338-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:53.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2357293795' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm05-94771-7"}]: dispatch 2026-03-09T20:21:53.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[51870]: from='client.25021 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm05-94776-7"}]: dispatch 2026-03-09T20:21:53.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1240899669' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm05-94776-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:53.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[51870]: from='client.25003 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm05-94338-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:53.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[51870]: from='client.25006 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm05-94771-7"}]: dispatch 2026-03-09T20:21:53.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2357293795' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm05-94771-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:53.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[51870]: from='client.25021 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm05-94776-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:53.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[51870]: from='client.25006 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm05-94771-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:53.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:53 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:21:53.970 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_1_[95356]: starting. 2026-03-09T20:21:53.970 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_1_[95356]: creating pool ceph_test_rados_list_parallel.vm05-95080 2026-03-09T20:21:53.970 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_1_[95356]: created object 0... 2026-03-09T20:21:53.970 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_1_[95356]: created object 25... 2026-03-09T20:21:53.970 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_1_[95356]: created object 49... 2026-03-09T20:21:53.970 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_1_[95356]: finishing. 
2026-03-09T20:21:53.970 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_1_[95356]: shutting down. 2026-03-09T20:21:53.970 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_2_[95357]: starting. 2026-03-09T20:21:53.970 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_2_[95357]: listing objects. 2026-03-09T20:21:53.970 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_2_[95357]: listed object 0... 2026-03-09T20:21:53.970 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_2_[95357]: listed object 25... 2026-03-09T20:21:53.970 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_2_[95357]: saw 50 objects 2026-03-09T20:21:53.970 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_2_[95357]: shutting down. 2026-03-09T20:21:53.970 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_3_[96004]: starting. 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_3_[96004]: creating pool ceph_test_rados_list_parallel.vm05-95080 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_3_[96004]: created object 0... 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_3_[96004]: created object 25... 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_3_[96004]: created object 49... 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_3_[96004]: finishing. 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_3_[96004]: shutting down. 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_4_[96005]: starting. 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_4_[96005]: listing objects. 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_4_[96005]: listed object 0... 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_4_[96005]: listed object 25... 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_4_[96005]: saw 45 objects 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_4_[96005]: shutting down. 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_5_[96006]: starting. 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_5_[96006]: removed 25 objects... 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_5_[96006]: removed half of the objects 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_5_[96006]: removed 50 objects... 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_5_[96006]: removed 50 objects 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_5_[96006]: shutting down. 
2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_6_[96052]: starting. 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_6_[96052]: creating pool ceph_test_rados_list_parallel.vm05-95080 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_6_[96052]: created object 0... 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_6_[96052]: created object 25... 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_6_[96052]: created object 49... 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_6_[96052]: finishing. 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_6_[96052]: shutting down. 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_7_[96053]: starting. 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_7_[96053]: listing objects. 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_7_[96053]: listed object 0... 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_7_[96053]: listed object 25... 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_7_[96053]: listed object 50... 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_7_[96053]: saw 53 objects 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_7_[96053]: shutting down. 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_8_[96054]: starting. 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_8_[96054]: added 25 objects... 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_8_[96054]: added half of the objects 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_8_[96054]: added 50 objects... 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_8_[96054]: added 50 objects 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_8_[96054]: shutting down. 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-09T20:21:53.971 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-09T20:21:54.260 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_9_[96248]: starting. 
2026-03-09T20:21:54.260 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_9_[96248]: creating pool ceph_test_rados_list_parallel.vm05-95080 2026-03-09T20:21:54.260 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_9_[96248]: created object 0... 2026-03-09T20:21:54.260 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_9_[96248]: created object 25... 2026-03-09T20:21:54.260 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_9_[96248]: created object 49... 2026-03-09T20:21:54.260 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_9_[96248]: finishing. 2026-03-09T20:21:54.260 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_9_[96248]: shutting down. 2026-03-09T20:21:54.260 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-09T20:21:54.260 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-09T20:21:54.260 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-09T20:21:54.261 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_10_[96249]: starting. 2026-03-09T20:21:54.261 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_10_[96249]: listing objects. 2026-03-09T20:21:54.261 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_10_[96249]: listed object 0... 2026-03-09T20:21:54.261 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_10_[96249]: listed object 25... 2026-03-09T20:21:54.261 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_10_[96249]: listed object 50... 2026-03-09T20:21:54.261 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_10_[96249]: listed object 75... 2026-03-09T20:21:54.261 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_10_[96249]: saw 98 objects 2026-03-09T20:21:54.261 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_10_[96249]: shutting down. 2026-03-09T20:21:54.261 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-09T20:21:54.261 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-09T20:21:54.261 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-09T20:21:54.261 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_12_[96251]: starting. 2026-03-09T20:21:54.261 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_12_[96251]: added 25 objects... 2026-03-09T20:21:54.261 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_12_[96251]: added half of the objects 2026-03-09T20:21:54.261 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_12_[96251]: added 50 objects... 2026-03-09T20:21:54.261 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_12_[96251]: added 50 objects 2026-03-09T20:21:54.261 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_12_[96251]: shutting down. 2026-03-09T20:21:54.261 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-09T20:21:54.261 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-09T20:21:54.261 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-09T20:21:54.261 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_11_[96250]: starting. 
2026-03-09T20:21:54.261 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_11_[96250]: added 25 objects... 2026-03-09T20:21:54.261 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_11_[96250]: added half of the objects 2026-03-09T20:21:54.261 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_11_[96250]: added 50 objects... 2026-03-09T20:21:54.261 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_11_[96250]: added 50 objects 2026-03-09T20:21:54.261 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_11_[96250]: shutting down. 2026-03-09T20:21:54.261 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-09T20:21:54.261 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-09T20:21:54.261 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-09T20:21:54.261 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_13_[96252]: starting. 2026-03-09T20:21:54.261 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_13_[96252]: removed 25 objects... 2026-03-09T20:21:54.261 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_13_[96252]: removed half of the objects 2026-03-09T20:21:54.261 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_13_[96252]: removed 50 objects... 2026-03-09T20:21:54.261 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_13_[96252]: removed 50 objects 2026-03-09T20:21:54.261 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_13_[96252]: shutting down. 2026-03-09T20:21:54.261 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-09T20:21:54.261 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-09T20:21:54.261 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-09T20:21:54.261 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-09T20:21:54.261 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_14_[96328]: starting. 2026-03-09T20:21:54.261 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_14_[96328]: creating pool ceph_test_rados_list_parallel.vm05-95080 2026-03-09T20:21:54.262 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_14_[96328]: created object 0... 2026-03-09T20:21:54.262 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_14_[96328]: created object 25... 2026-03-09T20:21:54.262 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_14_[96328]: created object 49... 2026-03-09T20:21:54.262 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_14_[96328]: finishing. 2026-03-09T20:21:54.262 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_14_[96328]: shutting down. 2026-03-09T20:21:54.262 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-09T20:21:54.262 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-09T20:21:54.262 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-09T20:21:54.262 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-09T20:21:54.262 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_15_[96329]: starting. 
2026-03-09T20:21:54.262 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_15_[96329]: listing objects. 2026-03-09T20:21:54.262 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_15_[96329]: listed object 0... 2026-03-09T20:21:54.262 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_15_[96329]: listed object 25... 2026-03-09T20:21:54.561 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_15_[96329]: listed object 50... 2026-03-09T20:21:54.561 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_15_[96329]: listed object 75... 2026-03-09T20:21:54.561 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_15_[96329]: listed object 100... 2026-03-09T20:21:54.561 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_15_[96329]: listed object 125... 2026-03-09T20:21:54.561 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_15_[96329]: saw 150 objects 2026-03-09T20:21:54.561 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_15_[96329]: shutting down. 2026-03-09T20:21:54.561 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-09T20:21:54.561 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-09T20:21:54.561 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-09T20:21:54.561 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-09T20:21:54.561 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_16_[96330]: starting. 2026-03-09T20:21:54.561 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_16_[96330]: added 25 objects... 2026-03-09T20:21:54.561 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_16_[96330]: added half of the objects 2026-03-09T20:21:54.561 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_16_[96330]: added 50 objects... 2026-03-09T20:21:54.561 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_16_[96330]: added 50 objects 2026-03-09T20:21:54.561 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_16_[96330]: shutting down. 2026-03-09T20:21:54.561 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-09T20:21:54.561 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-09T20:21:54.561 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-09T20:21:54.561 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-09T20:21:54.561 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-09T20:21:54.561 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******* SUCCESS ********** 2026-03-09T20:21:54.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:54 vm09 ceph-mon[54524]: from='client.25012 ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["onexx"]}]': finished 2026-03-09T20:21:54.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:54 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3936575652' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-09T20:21:54.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:54 vm09 ceph-mon[54524]: twoxx 2026-03-09T20:21:54.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:54 vm09 ceph-mon[54524]: from='client.25012 ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-09T20:21:54.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:54 vm09 ceph-mon[54524]: from='client.24901 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritevm05-95462-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm05-95462-1"}]': finished 2026-03-09T20:21:54.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:54 vm09 ceph-mon[54524]: from='client.24857 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsvm05-95542-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm05-95542-1"}]': finished 2026-03-09T20:21:54.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:54 vm09 ceph-mon[54524]: from='client.24905 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:54.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:54 vm09 ceph-mon[54524]: from='client.24967 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm05-94350-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:21:54.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:54 vm09 ceph-mon[54524]: from='client.25003 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm05-94338-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:21:54.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:54 vm09 ceph-mon[54524]: from='client.25021 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm05-94776-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:21:54.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:54 vm09 ceph-mon[54524]: from='client.25006 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm05-94771-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:21:54.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:54 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2357293795' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm05-94771-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm05-94771-7"}]: dispatch 2026-03-09T20:21:54.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:54 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/696721188' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm05-94338-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:21:54.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:54 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/2315717979' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm05-94350-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm05-94350-16"}]: dispatch 2026-03-09T20:21:54.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:54 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1240899669' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm05-94776-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm05-94776-7"}]: dispatch 2026-03-09T20:21:54.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:54 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1062611913' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm05-94281-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:54.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:54 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2247154352' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:54.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:54 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/708443203' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm05-94310-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:54.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:54 vm09 ceph-mon[54524]: osdmap e65: 8 total, 8 up, 8 in 2026-03-09T20:21:54.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:54 vm09 ceph-mon[54524]: from='client.25006 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm05-94771-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm05-94771-7"}]: dispatch 2026-03-09T20:21:54.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:54 vm09 ceph-mon[54524]: from='client.25003 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm05-94338-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:21:54.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:54 vm09 ceph-mon[54524]: from='client.24967 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm05-94350-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm05-94350-16"}]: dispatch 2026-03-09T20:21:54.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:54 vm09 ceph-mon[54524]: from='client.25021 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm05-94776-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm05-94776-7"}]: dispatch 2026-03-09T20:21:54.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:54 vm09 ceph-mon[54524]: from='client.25009 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm05-94281-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:54.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:54 vm09 ceph-mon[54524]: from='client.25024 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": 
"test-rados-api-vm05-94689-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:54.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:54 vm09 ceph-mon[54524]: from='client.25018 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm05-94310-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:54.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:54 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:21:54.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[51870]: from='client.25012 ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["onexx"]}]': finished 2026-03-09T20:21:54.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3936575652' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-09T20:21:54.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[51870]: twoxx 2026-03-09T20:21:54.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[51870]: from='client.25012 ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-09T20:21:54.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[51870]: from='client.24901 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritevm05-95462-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm05-95462-1"}]': finished 2026-03-09T20:21:54.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[51870]: from='client.24857 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsvm05-95542-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm05-95542-1"}]': finished 2026-03-09T20:21:54.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[51870]: from='client.24905 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:54.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[51870]: from='client.24967 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm05-94350-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:21:54.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[51870]: from='client.25003 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm05-94338-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:21:54.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[51870]: from='client.25021 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm05-94776-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:21:54.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[51870]: from='client.25006 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm05-94771-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:21:54.910 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2357293795' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm05-94771-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm05-94771-7"}]: dispatch 2026-03-09T20:21:54.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/696721188' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm05-94338-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:21:54.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2315717979' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm05-94350-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm05-94350-16"}]: dispatch 2026-03-09T20:21:54.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1240899669' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm05-94776-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm05-94776-7"}]: dispatch 2026-03-09T20:21:54.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1062611913' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm05-94281-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:54.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2247154352' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:54.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/708443203' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm05-94310-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:54.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[51870]: osdmap e65: 8 total, 8 up, 8 in 2026-03-09T20:21:54.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[51870]: from='client.25006 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm05-94771-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm05-94771-7"}]: dispatch 2026-03-09T20:21:54.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[51870]: from='client.25003 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm05-94338-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:21:54.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[51870]: from='client.24967 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm05-94350-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm05-94350-16"}]: dispatch 2026-03-09T20:21:54.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[51870]: from='client.25021 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm05-94776-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm05-94776-7"}]: dispatch 2026-03-09T20:21:54.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[51870]: from='client.25009 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm05-94281-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:54.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[51870]: from='client.25024 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:54.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[51870]: from='client.25018 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm05-94310-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:54.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:21:54.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[61345]: from='client.25012 ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["onexx"]}]': finished 2026-03-09T20:21:54.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3936575652' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-09T20:21:54.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[61345]: twoxx 2026-03-09T20:21:54.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[61345]: from='client.25012 ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-09T20:21:54.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[61345]: from='client.24901 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritevm05-95462-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm05-95462-1"}]': finished 2026-03-09T20:21:54.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[61345]: from='client.24857 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsvm05-95542-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm05-95542-1"}]': finished 2026-03-09T20:21:54.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[61345]: from='client.24905 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:54.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[61345]: from='client.24967 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm05-94350-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:21:54.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[61345]: from='client.25003 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm05-94338-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:21:54.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[61345]: from='client.25021 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm05-94776-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:21:54.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[61345]: from='client.25006 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm05-94771-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:21:54.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2357293795' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm05-94771-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm05-94771-7"}]: dispatch 2026-03-09T20:21:54.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/696721188' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm05-94338-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:21:54.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/2315717979' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm05-94350-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm05-94350-16"}]: dispatch 2026-03-09T20:21:54.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1240899669' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm05-94776-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm05-94776-7"}]: dispatch 2026-03-09T20:21:54.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1062611913' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm05-94281-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:54.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2247154352' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:54.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/708443203' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm05-94310-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:54.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[61345]: osdmap e65: 8 total, 8 up, 8 in 2026-03-09T20:21:54.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[61345]: from='client.25006 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm05-94771-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm05-94771-7"}]: dispatch 2026-03-09T20:21:54.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[61345]: from='client.25003 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm05-94338-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:21:54.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[61345]: from='client.24967 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm05-94350-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm05-94350-16"}]: dispatch 2026-03-09T20:21:54.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[61345]: from='client.25021 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm05-94776-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm05-94776-7"}]: dispatch 2026-03-09T20:21:54.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[61345]: from='client.25009 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm05-94281-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:54.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[61345]: from='client.25024 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": 
"test-rados-api-vm05-94689-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:54.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[61345]: from='client.25018 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm05-94310-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:54.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:54 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:21:55.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:55 vm09 ceph-mon[54524]: pgmap v44: 796 pgs: 664 unknown, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 116 B/s, 5 objects/s recovering 2026-03-09T20:21:55.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:55 vm09 ceph-mon[54524]: from='client.25012 ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["twoxx"]}]': finished 2026-03-09T20:21:55.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:55 vm09 ceph-mon[54524]: from='client.25009 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm05-94281-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:55.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:55 vm09 ceph-mon[54524]: from='client.25024 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:55.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:55 vm09 ceph-mon[54524]: from='client.25018 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm05-94310-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:55.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:55 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2753084634' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm05-95462-1"}]: dispatch 2026-03-09T20:21:55.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:55 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2614442900' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:55.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:55 vm09 ceph-mon[54524]: osdmap e66: 8 total, 8 up, 8 in 2026-03-09T20:21:55.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:55 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/214214830' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-94822-1"}]: dispatch 2026-03-09T20:21:55.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:55 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/2289751889' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:55.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:55 vm09 ceph-mon[54524]: from='client.24901 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm05-95462-1"}]: dispatch 2026-03-09T20:21:55.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:55 vm09 ceph-mon[54524]: from='client.25078 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:55.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:55 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3781038933' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm05-94410-10"}]: dispatch 2026-03-09T20:21:55.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:55 vm09 ceph-mon[54524]: from='client.25183 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm05-94410-10"}]: dispatch 2026-03-09T20:21:55.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:55 vm09 ceph-mon[54524]: from='client.24731 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-94822-1"}]: dispatch 2026-03-09T20:21:55.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:55 vm09 ceph-mon[54524]: from='client.24905 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:55.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:55 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3781038933' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm05-94410-10"}]: dispatch 2026-03-09T20:21:55.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:55 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1614400225' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm05-94564-10"}]: dispatch 2026-03-09T20:21:55.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:55 vm09 ceph-mon[54524]: from='client.25183 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm05-94410-10"}]: dispatch 2026-03-09T20:21:55.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:55 vm09 ceph-mon[54524]: from='client.25189 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm05-94564-10"}]: dispatch 2026-03-09T20:21:55.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:55 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1614400225' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm05-94564-10"}]: dispatch 2026-03-09T20:21:55.524 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:55 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3781038933' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm05-94410-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:55.524 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:55 vm09 ceph-mon[54524]: from='client.25189 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm05-94564-10"}]: dispatch 2026-03-09T20:21:55.524 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:55 vm09 ceph-mon[54524]: from='client.25183 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm05-94410-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:55.524 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:55 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1614400225' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm05-94564-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:55.524 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:55 vm09 ceph-mon[54524]: from='client.25189 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm05-94564-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:55.524 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:55 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:21:55.524 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:55 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1197467876' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch 2026-03-09T20:21:55.524 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:21:55 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:21:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[61345]: pgmap v44: 796 pgs: 664 unknown, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 116 B/s, 5 objects/s recovering 2026-03-09T20:21:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[61345]: from='client.25012 ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["twoxx"]}]': finished 2026-03-09T20:21:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[61345]: from='client.25009 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm05-94281-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[61345]: from='client.25024 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[61345]: from='client.25018 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm05-94310-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/2753084634' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm05-95462-1"}]: dispatch 2026-03-09T20:21:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2614442900' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[61345]: osdmap e66: 8 total, 8 up, 8 in 2026-03-09T20:21:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/214214830' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-94822-1"}]: dispatch 2026-03-09T20:21:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2289751889' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[61345]: from='client.24901 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm05-95462-1"}]: dispatch 2026-03-09T20:21:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[61345]: from='client.25078 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3781038933' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm05-94410-10"}]: dispatch 2026-03-09T20:21:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[61345]: from='client.25183 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm05-94410-10"}]: dispatch 2026-03-09T20:21:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[61345]: from='client.24731 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-94822-1"}]: dispatch 2026-03-09T20:21:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[61345]: from='client.24905 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3781038933' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm05-94410-10"}]: dispatch 2026-03-09T20:21:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1614400225' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm05-94564-10"}]: dispatch 2026-03-09T20:21:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[61345]: from='client.25183 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm05-94410-10"}]: dispatch 2026-03-09T20:21:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[61345]: from='client.25189 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm05-94564-10"}]: dispatch 2026-03-09T20:21:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1614400225' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm05-94564-10"}]: dispatch 2026-03-09T20:21:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3781038933' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm05-94410-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[61345]: from='client.25189 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm05-94564-10"}]: dispatch 2026-03-09T20:21:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[61345]: from='client.25183 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm05-94410-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1614400225' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm05-94564-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[61345]: from='client.25189 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm05-94564-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:21:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1197467876' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch 2026-03-09T20:21:55.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[51870]: pgmap v44: 796 pgs: 664 unknown, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 116 B/s, 5 objects/s recovering 2026-03-09T20:21:55.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[51870]: from='client.25012 ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["twoxx"]}]': finished 2026-03-09T20:21:55.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[51870]: from='client.25009 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm05-94281-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:55.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[51870]: from='client.25024 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:55.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[51870]: from='client.25018 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm05-94310-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:55.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2753084634' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm05-95462-1"}]: dispatch 2026-03-09T20:21:55.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2614442900' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:55.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[51870]: osdmap e66: 8 total, 8 up, 8 in 2026-03-09T20:21:55.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/214214830' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-94822-1"}]: dispatch 2026-03-09T20:21:55.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2289751889' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:55.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[51870]: from='client.24901 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm05-95462-1"}]: dispatch 2026-03-09T20:21:55.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[51870]: from='client.25078 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:55.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3781038933' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm05-94410-10"}]: dispatch 2026-03-09T20:21:55.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[51870]: from='client.25183 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm05-94410-10"}]: dispatch 2026-03-09T20:21:55.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[51870]: from='client.24731 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-94822-1"}]: dispatch 2026-03-09T20:21:55.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[51870]: from='client.24905 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:55.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3781038933' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm05-94410-10"}]: dispatch 2026-03-09T20:21:55.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1614400225' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm05-94564-10"}]: dispatch 2026-03-09T20:21:55.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[51870]: from='client.25183 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm05-94410-10"}]: dispatch 2026-03-09T20:21:55.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[51870]: from='client.25189 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm05-94564-10"}]: dispatch 2026-03-09T20:21:55.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1614400225' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm05-94564-10"}]: dispatch 2026-03-09T20:21:55.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3781038933' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm05-94410-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:55.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[51870]: from='client.25189 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm05-94564-10"}]: dispatch 2026-03-09T20:21:55.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[51870]: from='client.25183 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm05-94410-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:55.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/1614400225' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm05-94564-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:55.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[51870]: from='client.25189 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm05-94564-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:55.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:21:55.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:55 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1197467876' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch 2026-03-09T20:21:56.452 INFO:tasks.workunit.client.0.vm05.stdout:client.? v1:192.168.123.105:0/696721188' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:21:56.452 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:52.625599+0000 mon.b [INF] from='client.? v1:192.168.123.105:0/2357293795' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm05-94771-7"}]: dispatch 2026-03-09T20:21:56.452 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:52.631580+0000 mon.b [INF] from='client.? v1:192.168.123.105:0/1240899669' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm05-94776-7"}]: dispatch 2026-03-09T20:21:56.452 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:52.633743+0000 mon.a [INF] from='client.25003 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:21:56.452 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:52.635576+0000 mon.a [INF] from='client.25006 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm05-94771-7"}]: dispatch 2026-03-09T20:21:56.452 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:52.636146+0000 mon.b [INF] from='client.? v1:192.168.123.105:0/696721188' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm05-94338-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:56.452 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:52.640445+0000 mon.b [INF] from='client.? v1:192.168.123.105:0/2357293795' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm05-94771-7"}]: dispatch 2026-03-09T20:21:56.452 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:52.656388+0000 mon.a [INF] from='client.25021 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm05-94776-7"}]: dispatch 2026-03-09T20:21:56.452 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:52.659678+0000 mon.b [INF] from='client.? 
v1:192.168.123.105:0/1240899669' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm05-94776-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:56.452 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:52.681613+0000 mon.a [INF] from='client.25003 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm05-94338-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:56.452 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:52.681796+0000 mon.a [INF] from='client.25006 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm05-94771-7"}]: dispatch 2026-03-09T20:21:56.452 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:52.684208+0000 mon.b [INF] from='client.? v1:192.168.123.105:0/2357293795' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm05-94771-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:56.452 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:52.721841+0000 mon.a [INF] from='client.25021 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm05-94776-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:56.452 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:52.722013+0000 mon.a [INF] from='client.25006 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm05-94771-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:56.452 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:54.448335+0000 mon.a [INF] from='client.25012 ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["twoxx"]}]': finished 2026-03-09T20:21:56.452 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:54.460382+0000 mon.a [INF] from='client.25009 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm05-94281-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:56.452 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:54.460413+0000 mon.a [INF] from='client.25024 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:56.452 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:54.460427+0000 mon.a [INF] from='client.25018 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm05-94310-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:56.452 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:54.483209+0000 mon.b [INF] from='client.? v1:192.168.123.105:0/2753084634' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm05-95462-1"}]: dispatch 2026-03-09T20:21:56.452 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:54.483873+0000 mon.b [INF] from='client.? 
v1:192.168.123.105:0/2614442900' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:56.452 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:54.546214+0000 mon.c [INF] from='client.? v1:192.168.123.105:0/214214830' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-94822-1"}]: dispatch 2026-03-09T20:21:56.452 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:54.546457+0000 mon.c [INF] from='client.? v1:192.168.123.105:0/2289751889' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:56.452 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:54.574161+0000 mon.a [INF] from='client.24901 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm05-95462-1"}]: dispatch 2026-03-09T20:21:56.452 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:54.574346+0000 mon.a [INF] from='client.25078 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:56.452 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:54.628209+0000 mon.b [INF] from='client.? v1:192.168.123.105:0/3781038933' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm05-94410-10"}]: dispatch 2026-03-09T20:21:56.452 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:54.634366+0000 mon.a [INF] from='client.25183 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm05-94410-10"}]: dispatch 2026-03-09T20:21:56.452 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:54.634639+0000 mon.a [INF] from='client.24731 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-94822-1"}]: dispatch 2026-03-09T20:21:56.452 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:54.634691+0000 mon.a [INF] from='client.24905 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:56.452 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:54.639643+0000 mon.b [INF] from='client.? v1:192.168.123.105:0/3781038933' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm05-94410-10"}]: dispatch 2026-03-09T20:21:56.452 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:54.645840+0000 mon.b [INF] from='client.? 
v1:192.168.123.105:0/1614400225' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm05-94564-10"}]: dispatch 2026-03-09T20:21:56.452 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:54.648853+0000 mon.a [INF] from='client.25183 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm05-94410-10"}]: dispatch 2026-03-09T20:21:56.452 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:54.648952+0000 mon.a [INF] from='client.25189 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm05-94564-10"}]: dispatch 2026-03-09T20:21:56.452 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:54.663302+0000 mon.b [INF] from='client.? v1:192.168.123.105:0/1614400225' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm05-94564-10"}]: dispatch 2026-03-09T20:21:57.334 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:54.663713+0000 mon.b [INF] from='client.? v1:192.168.123.105:0/3781038933' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm05-94410-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:57.334 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:54.690225+0000 mon.a [INF] from='client.25189 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm05-94564-10"}]: dispatch 2026-03-09T20:21:57.334 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:54.690384+0000 mon.a [INF] from='client.25183 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm05-94410-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:57.334 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:54.695036+0000 mon.b [INF] from='client.? 
v1:192.168.123.105:0/1614400225' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm05-94564-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:57.334 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:54.698254+0000 mon.a [INF] from='client.25189 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm05-94564-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:57.334 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:55.472092+0000 mon.a [INF] from='client.25006 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm05-94771-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm05-94771-7"}]': finished 2026-03-09T20:21:57.334 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:55.472172+0000 mon.a [INF] from='client.25003 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm05-94338-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm05-94338-23"}]': finished 2026-03-09T20:21:57.334 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:55.472202+0000 mon.a [INF] from='client.24967 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm05-94350-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm05-94350-16"}]': finished 2026-03-09T20:21:57.334 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:55.472226+0000 mon.a [INF] from='client.25021 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm05-94776-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm05-94776-7"}]': finished 2026-03-09T20:21:57.334 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:55.472263+0000 mon.a [INF] from='client.24901 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm05-95462-1"}]': finished 2026-03-09T20:21:57.334 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:55.472288+0000 mon.a [INF] from='client.25078 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:57.334 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:55.472316+0000 mon.a [INF] from='client.24731 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-94822-1"}]': finished 2026-03-09T20:21:57.334 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:55.472338+0000 mon.a [INF] from='client.24905 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-4-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:57.334 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:55.472364+0000 mon.a [INF] from='client.25183 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm05-94410-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': 
finished 2026-03-09T20:21:57.334 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:55.472485+0000 mon.a [INF] from='client.25189 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm05-94564-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:21:57.334 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:55.496044+0000 mon.b [INF] from='client.? v1:192.168.123.105:0/2753084634' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm05-95462-1"}]: dispatch 2026-03-09T20:21:57.335 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:55.500732+0000 mon.b [INF] from='client.? v1:192.168.123.105:0/3781038933' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm05-94410-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm05-94410-10"}]: dispatch 2026-03-09T20:21:57.335 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:55.500850+0000 mon.b [INF] from='client.? v1:192.168.123.105:0/1614400225' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm05-94564-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm05-94564-10"}]: dispatch 2026-03-09T20:21:57.335 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:55.506327+0000 mon.a [INF] from='client.24901 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm05-95462-1"}]: dispatch 2026-03-09T20:21:57.335 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:55.506567+0000 mon.a [INF] from='client.25183 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm05-94410-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm05-94410-10"}]: dispatch 2026-03-09T20:21:57.335 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:55.506862+0000 mon.a [INF] from='client.25189 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm05-94564-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm05-94564-10"}]: dispatch 2026-03-09T20:21:57.335 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:55.508473+0000 mon.b [INF] from='client.? v1:192.168.123.105:0/708443203' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm05-94310-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:57.335 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:55.523178+0000 mon.a [INF] from='client.25018 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm05-94310-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:57.335 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:55.527011+0000 mon.c [INF] from='client.? v1:192.168.123.105:0/214214830' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm05-94822-1"}]: dispatch 2026-03-09T20:21:57.335 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:55.541590+0000 mon.c [INF] from='client.? 
v1:192.168.123.105:0/266869583' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm05-95542-1"}]: dispatch 2026-03-09T20:21:57.335 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:55.541934+0000 mon.c [INF] from='client.? v1:192.168.123.105:0/2289751889' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-4", "tierpool": "test-rados-api-vm05-94573-4-cache"}]: dispatch 2026-03-09T20:21:57.335 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:55.550169+0000 mon.a [INF] from='client.24731 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm05-94822-1"}]: dispatch 2026-03-09T20:21:57.335 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:55.551092+0000 mon.a [INF] from='client.24857 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm05-95542-1"}]: dispatch 2026-03-09T20:21:57.335 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:55.551190+0000 mon.a [INF] from='client.24905 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-4", "tierpool": "test-rados-api-vm05-94573-4-cache"}]: dispatch 2026-03-09T20:21:57.396 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:56.452776+0000 mon.b [INF] from='client.? v1:192.168.123.105:0/3936575652' entity='client.admin' cmd=[{"prefix":"log delete_pools_parallel: process_1_[95321]: starting. 2026-03-09T20:21:57.396 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_1_[95321]: creating pool ceph_test_rados_delete_pools_parallel.vm05-95207 2026-03-09T20:21:57.396 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_1_[95321]: created object 0... 2026-03-09T20:21:57.396 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_1_[95321]: created object 25... 2026-03-09T20:21:57.396 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_1_[95321]: created object 49... 2026-03-09T20:21:57.396 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_1_[95321]: finishing. 2026-03-09T20:21:57.396 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_1_[95321]: shutting down. 2026-03-09T20:21:57.396 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_2_[95322]: starting. 2026-03-09T20:21:57.396 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_2_[95322]: deleting pool ceph_test_rados_delete_pools_parallel.vm05-95207 2026-03-09T20:21:57.396 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_2_[95322]: shutting down. 2026-03-09T20:21:57.396 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: ******************************* 2026-03-09T20:21:57.396 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_3_[96118]: starting. 2026-03-09T20:21:57.396 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_3_[96118]: creating pool ceph_test_rados_delete_pools_parallel.vm05-95207 2026-03-09T20:21:57.396 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_3_[96118]: created object 0... 2026-03-09T20:21:57.396 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_3_[96118]: created object 25... 
2026-03-09T20:21:57.396 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_3_[96118]: created object 49... 2026-03-09T20:21:57.396 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_3_[96118]: finishing. 2026-03-09T20:21:57.396 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_3_[96118]: shutting down. 2026-03-09T20:21:57.397 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: ******************************* 2026-03-09T20:21:57.397 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_5_[96120]: starting. 2026-03-09T20:21:57.397 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_5_[96120]: listing objects. 2026-03-09T20:21:57.397 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_5_[96120]: listed object 0... 2026-03-09T20:21:57.397 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_5_[96120]: listed object 25... 2026-03-09T20:21:57.397 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_5_[96120]: saw 50 objects 2026-03-09T20:21:57.397 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_5_[96120]: shutting down. 2026-03-09T20:21:57.397 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: ******************************* 2026-03-09T20:21:57.397 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_4_[96119]: starting. 2026-03-09T20:21:57.397 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_4_[96119]: deleting pool ceph_test_rados_delete_pools_parallel.vm05-95207 2026-03-09T20:21:57.397 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_4_[96119]: shutting down. 2026-03-09T20:21:57.397 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: ******************************* 2026-03-09T20:21:57.397 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: ******************************* 2026-03-09T20:21:57.397 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: ******* SUCCESS ********** 2026-03-09T20:21:57.399 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: process_1_[95309]: starting. 2026-03-09T20:21:57.399 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: process_1_[95309]: creating pool ceph_test_rados_open_pools_parallel.vm05-95180 2026-03-09T20:21:57.399 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: process_1_[95309]: created object 0... 2026-03-09T20:21:57.399 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: process_1_[95309]: created object 25... 2026-03-09T20:21:57.399 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: process_1_[95309]: created object 49... 2026-03-09T20:21:57.399 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: process_1_[95309]: finishing. 2026-03-09T20:21:57.399 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: process_1_[95309]: shutting down. 2026-03-09T20:21:57.399 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: process_2_[95310]: starting. 2026-03-09T20:21:57.399 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: process_2_[95310]: rados_pool_create. 2026-03-09T20:21:57.399 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: process_2_[95310]: rados_ioctx_create. 2026-03-09T20:21:57.399 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: process_2_[95310]: shutting down. 
2026-03-09T20:21:57.399 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: ******************************* 2026-03-09T20:21:57.399 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: process_3_[96122]: starting. 2026-03-09T20:21:57.399 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: process_3_[96122]: creating pool ceph_test_rados_open_pools_parallel.vm05-95180 2026-03-09T20:21:57.399 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: process_3_[96122]: created object 0... 2026-03-09T20:21:57.399 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: process_3_[96122]: created object 25... 2026-03-09T20:21:57.399 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: process_3_[96122]: created object 49... 2026-03-09T20:21:57.399 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: process_3_[96122]: finishing. 2026-03-09T20:21:57.399 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: process_3_[96122]: shutting down. 2026-03-09T20:21:57.399 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: ******************************* 2026-03-09T20:21:57.399 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: process_4_[96123]: starting. 2026-03-09T20:21:57.399 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: process_4_[96123]: rados_pool_create. 2026-03-09T20:21:57.400 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: process_4_[96123]: rados_ioctx_create. 2026-03-09T20:21:57.400 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: process_4_[96123]: shutting down. 2026-03-09T20:21:57.400 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: ******************************* 2026-03-09T20:21:57.400 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: ******************************* 2026-03-09T20:21:57.400 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: ******* SUCCESS ********** 2026-03-09T20:21:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:21:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[61345]: from='client.25006 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm05-94771-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm05-94771-7"}]': finished 2026-03-09T20:21:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[61345]: from='client.25003 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm05-94338-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm05-94338-23"}]': finished 2026-03-09T20:21:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[61345]: from='client.24967 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm05-94350-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm05-94350-16"}]': finished 2026-03-09T20:21:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[61345]: from='client.25021 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm05-94776-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, 
"erasure_code_profile":"testprofile-LibRadosStatECPP_vm05-94776-7"}]': finished 2026-03-09T20:21:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[61345]: from='client.24901 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm05-95462-1"}]': finished 2026-03-09T20:21:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[61345]: from='client.25078 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[61345]: from='client.24731 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-94822-1"}]': finished 2026-03-09T20:21:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[61345]: from='client.24905 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-4-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[61345]: from='client.25183 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm05-94410-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:21:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[61345]: from='client.25189 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm05-94564-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:21:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[61345]: osdmap e67: 8 total, 8 up, 8 in 2026-03-09T20:21:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2753084634' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm05-95462-1"}]: dispatch 2026-03-09T20:21:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3781038933' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm05-94410-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm05-94410-10"}]: dispatch 2026-03-09T20:21:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1614400225' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm05-94564-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm05-94564-10"}]: dispatch 2026-03-09T20:21:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[61345]: from='client.24901 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm05-95462-1"}]: dispatch 2026-03-09T20:21:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[61345]: from='client.25183 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm05-94410-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm05-94410-10"}]: dispatch 2026-03-09T20:21:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[61345]: from='client.25189 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm05-94564-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm05-94564-10"}]: dispatch 2026-03-09T20:21:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/708443203' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm05-94310-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[61345]: from='client.25018 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm05-94310-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/214214830' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm05-94822-1"}]: dispatch 2026-03-09T20:21:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/266869583' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm05-95542-1"}]: dispatch 2026-03-09T20:21:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2289751889' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-4", "tierpool": "test-rados-api-vm05-94573-4-cache"}]: dispatch 2026-03-09T20:21:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[61345]: from='client.24731 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm05-94822-1"}]: dispatch 2026-03-09T20:21:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[61345]: from='client.24857 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm05-95542-1"}]: dispatch 2026-03-09T20:21:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[61345]: from='client.24905 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-4", "tierpool": "test-rados-api-vm05-94573-4-cache"}]: dispatch 2026-03-09T20:21:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:21:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3936575652' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-09T20:21:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[61345]: threexx 2026-03-09T20:21:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[61345]: from='client.25012 ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-09T20:21:57.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:21:57.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[51870]: from='client.25006 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm05-94771-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm05-94771-7"}]': finished 2026-03-09T20:21:57.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[51870]: from='client.25003 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm05-94338-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm05-94338-23"}]': finished 2026-03-09T20:21:57.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[51870]: from='client.24967 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm05-94350-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm05-94350-16"}]': finished 2026-03-09T20:21:57.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[51870]: from='client.25021 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm05-94776-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm05-94776-7"}]': finished 2026-03-09T20:21:57.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[51870]: from='client.24901 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm05-95462-1"}]': finished 2026-03-09T20:21:57.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[51870]: from='client.25078 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:57.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[51870]: from='client.24731 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-94822-1"}]': finished 2026-03-09T20:21:57.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[51870]: from='client.24905 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-4-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:57.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[51870]: from='client.25183 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile 
set", "name": "testprofile-LibRadosLockEC_vm05-94410-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:21:57.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[51870]: from='client.25189 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm05-94564-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:21:57.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[51870]: osdmap e67: 8 total, 8 up, 8 in 2026-03-09T20:21:57.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2753084634' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm05-95462-1"}]: dispatch 2026-03-09T20:21:57.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3781038933' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm05-94410-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm05-94410-10"}]: dispatch 2026-03-09T20:21:57.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1614400225' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm05-94564-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm05-94564-10"}]: dispatch 2026-03-09T20:21:57.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[51870]: from='client.24901 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm05-95462-1"}]: dispatch 2026-03-09T20:21:57.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[51870]: from='client.25183 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm05-94410-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm05-94410-10"}]: dispatch 2026-03-09T20:21:57.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[51870]: from='client.25189 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm05-94564-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm05-94564-10"}]: dispatch 2026-03-09T20:21:57.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/708443203' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm05-94310-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:57.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[51870]: from='client.25018 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm05-94310-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:57.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/214214830' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm05-94822-1"}]: dispatch 2026-03-09T20:21:57.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/266869583' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm05-95542-1"}]: dispatch 2026-03-09T20:21:57.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2289751889' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-4", "tierpool": "test-rados-api-vm05-94573-4-cache"}]: dispatch 2026-03-09T20:21:57.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[51870]: from='client.24731 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm05-94822-1"}]: dispatch 2026-03-09T20:21:57.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[51870]: from='client.24857 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm05-95542-1"}]: dispatch 2026-03-09T20:21:57.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[51870]: from='client.24905 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-4", "tierpool": "test-rados-api-vm05-94573-4-cache"}]: dispatch 2026-03-09T20:21:57.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:21:57.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3936575652' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-09T20:21:57.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[51870]: threexx 2026-03-09T20:21:57.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:57 vm05 ceph-mon[51870]: from='client.25012 ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-09T20:21:57.715 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: Running main() from gmock_main.cc 2026-03-09T20:21:57.716 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [==========] Running 39 tests from 2 test suites. 2026-03-09T20:21:57.716 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [----------] Global test environment set-up. 
2026-03-09T20:21:57.716 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [----------] 21 tests from LibRadosIoPP 2026-03-09T20:21:57.716 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: seed 94338 2026-03-09T20:21:57.716 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoPP.TooBigPP 2026-03-09T20:21:57.716 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoPP.TooBigPP (0 ms) 2026-03-09T20:21:57.716 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoPP.SimpleWritePP 2026-03-09T20:21:57.716 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoPP.SimpleWritePP (562 ms) 2026-03-09T20:21:57.716 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoPP.ReadOpPP 2026-03-09T20:21:57.716 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoPP.ReadOpPP (10 ms) 2026-03-09T20:21:57.716 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoPP.SparseReadOpPP 2026-03-09T20:21:57.716 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoPP.SparseReadOpPP (5 ms) 2026-03-09T20:21:57.716 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoPP.RoundTripPP 2026-03-09T20:21:57.716 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoPP.RoundTripPP (16 ms) 2026-03-09T20:21:57.716 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoPP.RoundTripPP2 2026-03-09T20:21:57.716 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoPP.RoundTripPP2 (5 ms) 2026-03-09T20:21:57.716 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoPP.Checksum 2026-03-09T20:21:57.716 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoPP.Checksum (5 ms) 2026-03-09T20:21:57.716 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoPP.ReadIntoBufferlist 2026-03-09T20:21:57.716 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoPP.ReadIntoBufferlist (13 ms) 2026-03-09T20:21:57.716 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoPP.OverlappingWriteRoundTripPP 2026-03-09T20:21:57.716 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoPP.OverlappingWriteRoundTripPP (8 ms) 2026-03-09T20:21:57.716 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoPP.WriteFullRoundTripPP 2026-03-09T20:21:57.716 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoPP.WriteFullRoundTripPP (14 ms) 2026-03-09T20:21:57.716 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoPP.WriteFullRoundTripPP2 2026-03-09T20:21:57.716 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoPP.WriteFullRoundTripPP2 (3 ms) 2026-03-09T20:21:57.716 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoPP.AppendRoundTripPP 2026-03-09T20:21:57.716 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoPP.AppendRoundTripPP (12 ms) 2026-03-09T20:21:57.716 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoPP.TruncTestPP 2026-03-09T20:21:57.716 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoPP.TruncTestPP (3 ms) 2026-03-09T20:21:57.717 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoPP.RemoveTestPP 2026-03-09T20:21:57.717 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoPP.RemoveTestPP (5 ms) 2026-03-09T20:21:57.717 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] 
LibRadosIoPP.XattrsRoundTripPP 2026-03-09T20:21:57.717 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoPP.XattrsRoundTripPP (13 ms) 2026-03-09T20:21:57.717 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoPP.RmXattrPP 2026-03-09T20:21:57.717 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoPP.RmXattrPP (31 ms) 2026-03-09T20:21:57.717 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoPP.XattrListPP 2026-03-09T20:21:57.717 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoPP.XattrListPP (10 ms) 2026-03-09T20:21:57.717 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoPP.CrcZeroWrite 2026-03-09T20:21:57.717 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoPP.CrcZeroWrite (10 ms) 2026-03-09T20:21:57.717 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoPP.CmpExtPP 2026-03-09T20:21:57.717 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoPP.CmpExtPP (6 ms) 2026-03-09T20:21:57.717 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoPP.CmpExtDNEPP 2026-03-09T20:21:57.717 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoPP.CmpExtDNEPP (2 ms) 2026-03-09T20:21:57.717 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoPP.CmpExtMismatchPP 2026-03-09T20:21:57.717 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoPP.CmpExtMismatchPP (4 ms) 2026-03-09T20:21:57.717 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [----------] 21 tests from LibRadosIoPP (737 ms total) 2026-03-09T20:21:57.717 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: 2026-03-09T20:21:57.717 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [----------] 18 tests from LibRadosIoECPP 2026-03-09T20:21:57.717 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.SimpleWritePP 2026-03-09T20:21:57.717 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoECPP.SimpleWritePP (2176 ms) 2026-03-09T20:21:57.717 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.ReadOpPP 2026-03-09T20:21:57.717 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoECPP.ReadOpPP (27 ms) 2026-03-09T20:21:57.717 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.SparseReadOpPP 2026-03-09T20:21:57.717 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoECPP.SparseReadOpPP (4 ms) 2026-03-09T20:21:57.717 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.RoundTripPP 2026-03-09T20:21:57.717 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoECPP.RoundTripPP (3 ms) 2026-03-09T20:21:57.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:57 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:21:57.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:57 vm09 ceph-mon[54524]: from='client.25006 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm05-94771-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm05-94771-7"}]': finished 2026-03-09T20:21:57.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:57 vm09 ceph-mon[54524]: from='client.25003 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": 
"LibRadosIoECPP_vm05-94338-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm05-94338-23"}]': finished 2026-03-09T20:21:57.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:57 vm09 ceph-mon[54524]: from='client.24967 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm05-94350-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm05-94350-16"}]': finished 2026-03-09T20:21:57.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:57 vm09 ceph-mon[54524]: from='client.25021 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm05-94776-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm05-94776-7"}]': finished 2026-03-09T20:21:57.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:57 vm09 ceph-mon[54524]: from='client.24901 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm05-95462-1"}]': finished 2026-03-09T20:21:57.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:57 vm09 ceph-mon[54524]: from='client.25078 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:57.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:57 vm09 ceph-mon[54524]: from='client.24731 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-94822-1"}]': finished 2026-03-09T20:21:57.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:57 vm09 ceph-mon[54524]: from='client.24905 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-4-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:57.774 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:57 vm09 ceph-mon[54524]: from='client.25183 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm05-94410-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:21:57.774 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:57 vm09 ceph-mon[54524]: from='client.25189 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm05-94564-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:21:57.774 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:57 vm09 ceph-mon[54524]: osdmap e67: 8 total, 8 up, 8 in 2026-03-09T20:21:57.774 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:57 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2753084634' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm05-95462-1"}]: dispatch 2026-03-09T20:21:57.774 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:57 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3781038933' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm05-94410-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm05-94410-10"}]: dispatch 2026-03-09T20:21:57.774 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:57 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/1614400225' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm05-94564-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm05-94564-10"}]: dispatch 2026-03-09T20:21:57.774 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:57 vm09 ceph-mon[54524]: from='client.24901 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm05-95462-1"}]: dispatch 2026-03-09T20:21:57.774 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:57 vm09 ceph-mon[54524]: from='client.25183 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm05-94410-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm05-94410-10"}]: dispatch 2026-03-09T20:21:57.774 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:57 vm09 ceph-mon[54524]: from='client.25189 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm05-94564-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm05-94564-10"}]: dispatch 2026-03-09T20:21:57.774 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:57 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/708443203' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm05-94310-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:57.774 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:57 vm09 ceph-mon[54524]: from='client.25018 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm05-94310-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:57.774 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:57 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/214214830' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm05-94822-1"}]: dispatch 2026-03-09T20:21:57.774 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:57 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/266869583' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm05-95542-1"}]: dispatch 2026-03-09T20:21:57.774 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:57 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2289751889' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-4", "tierpool": "test-rados-api-vm05-94573-4-cache"}]: dispatch 2026-03-09T20:21:57.774 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:57 vm09 ceph-mon[54524]: from='client.24731 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm05-94822-1"}]: dispatch 2026-03-09T20:21:57.774 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:57 vm09 ceph-mon[54524]: from='client.24857 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm05-95542-1"}]: dispatch 2026-03-09T20:21:57.774 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:57 vm09 ceph-mon[54524]: from='client.24905 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-4", "tierpool": "test-rados-api-vm05-94573-4-cache"}]: dispatch 2026-03-09T20:21:57.774 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:57 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:21:57.774 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:57 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3936575652' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-09T20:21:57.774 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:57 vm09 ceph-mon[54524]: threexx 2026-03-09T20:21:57.774 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:57 vm09 ceph-mon[54524]: from='client.25012 ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-09T20:21:58.411 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN cmd: Running main() from gmock_main.cc 2026-03-09T20:21:58.411 INFO:tasks.workunit.client.0.vm05.stdout: cmd: [==========] Running 3 tests from 1 test suite. 2026-03-09T20:21:58.411 INFO:tasks.workunit.client.0.vm05.stdout: cmd: [----------] Global test environment set-up. 2026-03-09T20:21:58.411 INFO:tasks.workunit.client.0.vm05.stdout: cmd: [----------] 3 tests from NeoRadosCmd 2026-03-09T20:21:58.411 INFO:tasks.workunit.client.0.vm05.stdout: cmd: [ RUN ] NeoRadosCmd.MonDescribe 2026-03-09T20:21:58.411 INFO:tasks.workunit.client.0.vm05.stdout: cmd: [ OK ] NeoRadosCmd.MonDescribe (2163 ms) 2026-03-09T20:21:58.411 INFO:tasks.workunit.client.0.vm05.stdout: cmd: [ RUN ] NeoRadosCmd.OSDCmd 2026-03-09T20:21:58.411 INFO:tasks.workunit.client.0.vm05.stdout: cmd: [ OK ] NeoRadosCmd.OSDCmd (1997 ms) 2026-03-09T20:21:58.411 INFO:tasks.workunit.client.0.vm05.stdout: cmd: [ RUN ] NeoRadosCmd.PGCmd 2026-03-09T20:21:58.411 INFO:tasks.workunit.client.0.vm05.stdout: cmd: [ OK ] NeoRadosCmd.PGCmd (3906 ms) 2026-03-09T20:21:58.411 INFO:tasks.workunit.client.0.vm05.stdout: cmd: [----------] 3 tests from NeoRadosCmd (8066 ms total) 2026-03-09T20:21:58.411 INFO:tasks.workunit.client.0.vm05.stdout: cmd: 2026-03-09T20:21:58.411 INFO:tasks.workunit.client.0.vm05.stdout: cmd: [----------] Global test environment tear-down 2026-03-09T20:21:58.411 INFO:tasks.workunit.client.0.vm05.stdout: cmd: [==========] 3 tests from 1 test suite ran. (8075 ms total) 2026-03-09T20:21:58.411 INFO:tasks.workunit.client.0.vm05.stdout: cmd: [ PASSED ] 3 tests. 2026-03-09T20:21:58.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: pgmap v47: 900 pgs: 224 creating+peering, 192 unknown, 484 active+clean; 459 KiB data, 348 MiB used, 160 GiB / 160 GiB avail; 3.0 KiB/s rd, 23 KiB/s wr, 505 op/s 2026-03-09T20:21:58.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:21:58.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: from='client.25012 ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["threexx"]}]': finished 2026-03-09T20:21:58.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3936575652' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-09T20:21:58.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: fourxx 2026-03-09T20:21:58.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: from='client.25012 ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-09T20:21:58.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: from='client.24901 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritevm05-95462-1"}]': finished 2026-03-09T20:21:58.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: from='client.25018 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm05-94310-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:58.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: from='client.24731 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm05-94822-1"}]': finished 2026-03-09T20:21:58.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: from='client.24857 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm05-95542-1"}]': finished 2026-03-09T20:21:58.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: from='client.24905 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-4", "tierpool": "test-rados-api-vm05-94573-4-cache"}]': finished 2026-03-09T20:21:58.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/708443203' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm05-94310-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-09T20:21:58.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: osdmap e68: 8 total, 8 up, 8 in 2026-03-09T20:21:58.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: from='client.25018 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm05-94310-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-09T20:21:58.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2211110925' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:58.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3305536580' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm05-94281-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:58.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: from='client.25330 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:58.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: from='client.25327 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm05-94281-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:58.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1540944039' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch 2026-03-09T20:21:58.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1767113807' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm05-95462-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:58.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: from='client.25351 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm05-95462-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:58.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/266869583' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm05-95542-1"}]: dispatch 2026-03-09T20:21:58.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2289751889' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-4", "overlaypool": "test-rados-api-vm05-94573-4-cache"}]: dispatch 2026-03-09T20:21:58.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: from='client.24857 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm05-95542-1"}]: dispatch 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: from='client.24905 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-4", "overlaypool": "test-rados-api-vm05-94573-4-cache"}]: dispatch 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/696721188' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm05-94338-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: from='client.25003 ' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm05-94338-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: 16.9 deep-scrub starts 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: 16.9 deep-scrub ok 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: 16.3 deep-scrub starts 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: 16.3 deep-scrub ok 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: pgmap v47: 900 pgs: 224 creating+peering, 192 unknown, 484 active+clean; 459 KiB data, 348 MiB used, 160 GiB / 160 GiB avail; 3.0 KiB/s rd, 23 KiB/s wr, 505 op/s 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: from='client.25012 ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["threexx"]}]': finished 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3936575652' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: fourxx 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: from='client.25012 ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: from='client.24901 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritevm05-95462-1"}]': finished 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: from='client.25018 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm05-94310-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: from='client.24731 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm05-94822-1"}]': finished 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: from='client.24857 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm05-95542-1"}]': finished 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: from='client.24905 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-4", "tierpool": "test-rados-api-vm05-94573-4-cache"}]': finished 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/708443203' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm05-94310-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: osdmap e68: 8 total, 8 up, 8 in 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: from='client.25018 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm05-94310-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2211110925' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3305536580' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm05-94281-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: from='client.25330 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: from='client.25327 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm05-94281-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1540944039' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1767113807' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm05-95462-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: from='client.25351 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm05-95462-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/266869583' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm05-95542-1"}]: dispatch 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2289751889' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-4", "overlaypool": "test-rados-api-vm05-94573-4-cache"}]: dispatch 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: from='client.24857 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm05-95542-1"}]: dispatch 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: from='client.24905 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-4", "overlaypool": "test-rados-api-vm05-94573-4-cache"}]: dispatch 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-09T20:21:58.635 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-09T20:21:58.636 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-09T20:21:58.636 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-09T20:21:58.636 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-09T20:21:58.636 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-09T20:21:58.636 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-09T20:21:58.636 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-09T20:21:58.636 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-09T20:21:58.636 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-09T20:21:58.636 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-09T20:21:58.636 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-09T20:21:58.636 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-09T20:21:58.636 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-09T20:21:58.636 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-09T20:21:58.636 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-09T20:21:58.636 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-09T20:21:58.636 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-09T20:21:58.636 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/696721188' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm05-94338-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-09T20:21:58.636 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: from='client.25003 ' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm05-94338-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-09T20:21:58.636 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: 16.9 deep-scrub starts 2026-03-09T20:21:58.636 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: 16.9 deep-scrub ok 2026-03-09T20:21:58.636 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: 16.3 deep-scrub starts 2026-03-09T20:21:58.636 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: 16.3 deep-scrub ok 2026-03-09T20:21:58.636 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:58 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:21:58.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: pgmap v47: 900 pgs: 224 creating+peering, 192 unknown, 484 active+clean; 459 KiB data, 348 MiB used, 160 GiB / 160 GiB avail; 3.0 KiB/s rd, 23 KiB/s wr, 505 op/s 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: from='client.25012 ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["threexx"]}]': finished 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3936575652' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: fourxx 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: from='client.25012 ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: from='client.24901 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritevm05-95462-1"}]': finished 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: from='client.25018 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm05-94310-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: from='client.24731 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm05-94822-1"}]': finished 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: from='client.24857 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm05-95542-1"}]': finished 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: from='client.24905 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-4", "tierpool": "test-rados-api-vm05-94573-4-cache"}]': finished 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/708443203' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm05-94310-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: osdmap e68: 8 total, 8 up, 8 in 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: from='client.25018 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm05-94310-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2211110925' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3305536580' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm05-94281-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: from='client.25330 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: from='client.25327 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm05-94281-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1540944039' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1767113807' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm05-95462-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: from='client.25351 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm05-95462-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/266869583' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm05-95542-1"}]: dispatch 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2289751889' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-4", "overlaypool": "test-rados-api-vm05-94573-4-cache"}]: dispatch 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: from='client.24857 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm05-95542-1"}]: dispatch 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: from='client.24905 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-4", "overlaypool": "test-rados-api-vm05-94573-4-cache"}]: dispatch 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/696721188' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm05-94338-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: from='client.25003 ' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm05-94338-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: 16.9 deep-scrub starts 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: 16.9 deep-scrub ok 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: 16.3 deep-scrub starts 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: 16.3 deep-scrub ok 2026-03-09T20:21:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:58 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:21:58.909 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:21:58 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:21:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:21:59.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[61345]: 16.4 deep-scrub starts 2026-03-09T20:21:59.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[61345]: 16.4 deep-scrub ok 2026-03-09T20:21:59.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[61345]: 16.1 deep-scrub starts 2026-03-09T20:21:59.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[61345]: 16.1 deep-scrub ok 2026-03-09T20:21:59.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[61345]: 16.0 deep-scrub starts 2026-03-09T20:21:59.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[61345]: 16.0 deep-scrub ok 2026-03-09T20:21:59.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[61345]: 16.2 deep-scrub starts 2026-03-09T20:21:59.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[61345]: 16.2 deep-scrub ok 2026-03-09T20:21:59.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[61345]: pgmap v49: 804 pgs: 32 creating+peering, 288 unknown, 484 active+clean; 459 KiB data, 348 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 19 KiB/s wr, 428 op/s 2026-03-09T20:21:59.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[61345]: 16.6 deep-scrub starts 2026-03-09T20:21:59.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[61345]: 16.6 deep-scrub ok 2026-03-09T20:21:59.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[61345]: from='client.25012 ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["fourxx"]}]': finished 
2026-03-09T20:21:59.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[61345]: Health check update: 14 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:21:59.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[61345]: from='client.25183 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm05-94410-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm05-94410-10"}]': finished 2026-03-09T20:21:59.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[61345]: from='client.25189 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm05-94564-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm05-94564-10"}]': finished 2026-03-09T20:21:59.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[61345]: from='client.25018 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm05-94310-3", "field": "max_bytes", "val": "4096"}]': finished 2026-03-09T20:21:59.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[61345]: from='client.25330 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:59.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[61345]: from='client.25327 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm05-94281-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:59.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[61345]: from='client.25351 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm05-95462-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:21:59.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[61345]: from='client.24857 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsvm05-95542-1"}]': finished 2026-03-09T20:21:59.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[61345]: from='client.24905 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-4", "overlaypool": "test-rados-api-vm05-94573-4-cache"}]': finished 2026-03-09T20:21:59.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[61345]: from='client.25003 ' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm05-94338-23", "var": "allow_ec_overwrites", "val": "true"}]': finished 2026-03-09T20:21:59.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[61345]: osdmap e69: 8 total, 8 up, 8 in 2026-03-09T20:21:59.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/2289751889' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-4-cache", "mode": "writeback"}]: dispatch 2026-03-09T20:21:59.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[61345]: from='client.24905 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-4-cache", "mode": "writeback"}]: dispatch 2026-03-09T20:21:59.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3348390106' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-94822-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:59.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[61345]: from='client.25202 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-94822-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:59.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3819866823' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm05-95542-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:59.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1767113807' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm05-95462-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm05-95462-2"}]: dispatch 2026-03-09T20:21:59.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2357293795' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm05-94771-7"}]: dispatch 2026-03-09T20:21:59.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1240899669' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm05-94776-7"}]: dispatch 2026-03-09T20:21:59.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3600689769' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:59.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[61345]: from='client.25351 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm05-95462-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm05-95462-2"}]: dispatch 2026-03-09T20:21:59.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/2315717979' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm05-94350-16"}]: dispatch 2026-03-09T20:21:59.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[61345]: from='client.25006 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm05-94771-7"}]: dispatch 2026-03-09T20:21:59.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[61345]: from='client.25021 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm05-94776-7"}]: dispatch 2026-03-09T20:21:59.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[61345]: from='client.25366 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:59.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[61345]: from='client.24967 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm05-94350-16"}]: dispatch 2026-03-09T20:21:59.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:21:59.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:59.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:21:59.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[51870]: 16.4 deep-scrub starts 2026-03-09T20:21:59.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[51870]: 16.4 deep-scrub ok 2026-03-09T20:21:59.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[51870]: 16.1 deep-scrub starts 2026-03-09T20:21:59.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[51870]: 16.1 deep-scrub ok 2026-03-09T20:21:59.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[51870]: 16.0 deep-scrub starts 2026-03-09T20:21:59.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[51870]: 16.0 deep-scrub ok 2026-03-09T20:21:59.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[51870]: 16.2 deep-scrub starts 2026-03-09T20:21:59.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[51870]: 16.2 deep-scrub ok 2026-03-09T20:21:59.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[51870]: pgmap v49: 804 pgs: 32 creating+peering, 288 unknown, 484 active+clean; 459 KiB data, 348 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 19 KiB/s wr, 428 op/s 2026-03-09T20:21:59.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[51870]: 16.6 deep-scrub starts 2026-03-09T20:21:59.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[51870]: 16.6 deep-scrub ok 2026-03-09T20:21:59.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[51870]: from='client.25012 ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["fourxx"]}]': finished 2026-03-09T20:21:59.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 
20:21:59 vm05 ceph-mon[51870]: Health check update: 14 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:21:59.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[51870]: from='client.25183 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm05-94410-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm05-94410-10"}]': finished 2026-03-09T20:21:59.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[51870]: from='client.25189 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm05-94564-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm05-94564-10"}]': finished 2026-03-09T20:21:59.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[51870]: from='client.25018 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm05-94310-3", "field": "max_bytes", "val": "4096"}]': finished 2026-03-09T20:21:59.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[51870]: from='client.25330 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:59.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[51870]: from='client.25327 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm05-94281-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:59.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[51870]: from='client.25351 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm05-95462-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:21:59.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[51870]: from='client.24857 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsvm05-95542-1"}]': finished 2026-03-09T20:21:59.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[51870]: from='client.24905 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-4", "overlaypool": "test-rados-api-vm05-94573-4-cache"}]': finished 2026-03-09T20:21:59.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[51870]: from='client.25003 ' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm05-94338-23", "var": "allow_ec_overwrites", "val": "true"}]': finished 2026-03-09T20:21:59.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[51870]: osdmap e69: 8 total, 8 up, 8 in 2026-03-09T20:21:59.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2289751889' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-4-cache", "mode": "writeback"}]: dispatch 2026-03-09T20:21:59.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[51870]: from='client.24905 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-4-cache", "mode": "writeback"}]: dispatch 2026-03-09T20:21:59.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3348390106' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-94822-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:59.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[51870]: from='client.25202 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-94822-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:59.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3819866823' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm05-95542-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:59.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1767113807' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm05-95462-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm05-95462-2"}]: dispatch 2026-03-09T20:21:59.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2357293795' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm05-94771-7"}]: dispatch 2026-03-09T20:21:59.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1240899669' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm05-94776-7"}]: dispatch 2026-03-09T20:21:59.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3600689769' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:59.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[51870]: from='client.25351 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm05-95462-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm05-95462-2"}]: dispatch 2026-03-09T20:21:59.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/2315717979' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm05-94350-16"}]: dispatch 2026-03-09T20:21:59.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[51870]: from='client.25006 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm05-94771-7"}]: dispatch 2026-03-09T20:21:59.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[51870]: from='client.25021 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm05-94776-7"}]: dispatch 2026-03-09T20:21:59.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[51870]: from='client.25366 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:59.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[51870]: from='client.24967 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm05-94350-16"}]: dispatch 2026-03-09T20:21:59.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:21:59.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:59.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:21:59 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:21:59.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:59 vm09 ceph-mon[54524]: 16.4 deep-scrub starts 2026-03-09T20:21:59.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:59 vm09 ceph-mon[54524]: 16.4 deep-scrub ok 2026-03-09T20:21:59.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:59 vm09 ceph-mon[54524]: 16.1 deep-scrub starts 2026-03-09T20:21:59.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:59 vm09 ceph-mon[54524]: 16.1 deep-scrub ok 2026-03-09T20:21:59.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:59 vm09 ceph-mon[54524]: 16.0 deep-scrub starts 2026-03-09T20:21:59.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:59 vm09 ceph-mon[54524]: 16.0 deep-scrub ok 2026-03-09T20:21:59.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:59 vm09 ceph-mon[54524]: 16.2 deep-scrub starts 2026-03-09T20:21:59.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:59 vm09 ceph-mon[54524]: 16.2 deep-scrub ok 2026-03-09T20:21:59.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:59 vm09 ceph-mon[54524]: pgmap v49: 804 pgs: 32 creating+peering, 288 unknown, 484 active+clean; 459 KiB data, 348 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 19 KiB/s wr, 428 op/s 2026-03-09T20:21:59.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:59 vm09 ceph-mon[54524]: 16.6 deep-scrub starts 2026-03-09T20:21:59.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:59 vm09 ceph-mon[54524]: 16.6 deep-scrub ok 2026-03-09T20:21:59.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:59 vm09 ceph-mon[54524]: from='client.25012 ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["fourxx"]}]': finished 2026-03-09T20:21:59.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 
20:21:59 vm09 ceph-mon[54524]: Health check update: 14 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:21:59.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:59 vm09 ceph-mon[54524]: from='client.25183 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm05-94410-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm05-94410-10"}]': finished 2026-03-09T20:21:59.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:59 vm09 ceph-mon[54524]: from='client.25189 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm05-94564-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm05-94564-10"}]': finished 2026-03-09T20:21:59.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:59 vm09 ceph-mon[54524]: from='client.25018 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm05-94310-3", "field": "max_bytes", "val": "4096"}]': finished 2026-03-09T20:21:59.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:59 vm09 ceph-mon[54524]: from='client.25330 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:59.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:59 vm09 ceph-mon[54524]: from='client.25327 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm05-94281-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:59.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:59 vm09 ceph-mon[54524]: from='client.25351 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm05-95462-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:21:59.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:59 vm09 ceph-mon[54524]: from='client.24857 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsvm05-95542-1"}]': finished 2026-03-09T20:21:59.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:59 vm09 ceph-mon[54524]: from='client.24905 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-4", "overlaypool": "test-rados-api-vm05-94573-4-cache"}]': finished 2026-03-09T20:21:59.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:59 vm09 ceph-mon[54524]: from='client.25003 ' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm05-94338-23", "var": "allow_ec_overwrites", "val": "true"}]': finished 2026-03-09T20:21:59.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:59 vm09 ceph-mon[54524]: osdmap e69: 8 total, 8 up, 8 in 2026-03-09T20:21:59.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:59 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2289751889' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-4-cache", "mode": "writeback"}]: dispatch 2026-03-09T20:21:59.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:59 vm09 ceph-mon[54524]: from='client.24905 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-4-cache", "mode": "writeback"}]: dispatch 2026-03-09T20:21:59.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:59 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3348390106' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-94822-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:59.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:59 vm09 ceph-mon[54524]: from='client.25202 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-94822-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:59.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:59 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3819866823' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm05-95542-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:21:59.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:59 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1767113807' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm05-95462-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm05-95462-2"}]: dispatch 2026-03-09T20:21:59.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:59 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2357293795' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm05-94771-7"}]: dispatch 2026-03-09T20:21:59.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:59 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1240899669' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm05-94776-7"}]: dispatch 2026-03-09T20:21:59.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:59 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3600689769' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:59.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:59 vm09 ceph-mon[54524]: from='client.25351 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm05-95462-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm05-95462-2"}]: dispatch 2026-03-09T20:21:59.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:59 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/2315717979' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm05-94350-16"}]: dispatch 2026-03-09T20:21:59.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:59 vm09 ceph-mon[54524]: from='client.25006 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm05-94771-7"}]: dispatch 2026-03-09T20:21:59.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:59 vm09 ceph-mon[54524]: from='client.25021 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm05-94776-7"}]: dispatch 2026-03-09T20:21:59.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:59 vm09 ceph-mon[54524]: from='client.25366 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:59.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:59 vm09 ceph-mon[54524]: from='client.24967 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm05-94350-16"}]: dispatch 2026-03-09T20:21:59.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:59 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:21:59.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:59 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:21:59.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:21:59 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:22:00.344 INFO:tasks.workunit.client.0.vm05.stdout:", "logtext":["threexx"]}]: dispatch 2026-03-09T20:22:00.345 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:56.453008+0000 client.admin [INF] threexx 2026-03-09T20:22:00.345 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-09T20:21:56.453404+0000 mon.a [INF] from='client.25012 ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-09T20:22:00.345 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: [ OK ] LibRadosCmd.WatchLog (7827 ms) 2026-03-09T20:22:00.345 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: [----------] 4 tests from LibRadosCmd (10379 ms total) 2026-03-09T20:22:00.345 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: 2026-03-09T20:22:00.345 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: [----------] Global test environment tear-down 2026-03-09T20:22:00.345 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: [==========] 4 tests from 1 test suite ran. (10379 ms total) 2026-03-09T20:22:00.345 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: [ PASSED ] 4 tests. 2026-03-09T20:22:00.428 INFO:tasks.workunit.client.0.vm05.stdout: api_io: Running main() from gmock_main.cc 2026-03-09T20:22:00.428 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [==========] Running 24 tests from 2 test suites. 2026-03-09T20:22:00.428 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [----------] Global test environment set-up. 
2026-03-09T20:22:00.428 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [----------] 14 tests from LibRadosIo 2026-03-09T20:22:00.428 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIo.SimpleWrite 2026-03-09T20:22:00.428 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIo.SimpleWrite (565 ms) 2026-03-09T20:22:00.428 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIo.TooBig 2026-03-09T20:22:00.428 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIo.TooBig (0 ms) 2026-03-09T20:22:00.428 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIo.ReadTimeout 2026-03-09T20:22:00.428 INFO:tasks.workunit.client.0.vm05.stdout: api_io: no timeout :/ 2026-03-09T20:22:00.428 INFO:tasks.workunit.client.0.vm05.stdout: api_io: no timeout :/ 2026-03-09T20:22:00.428 INFO:tasks.workunit.client.0.vm05.stdout: api_io: no timeout :/ 2026-03-09T20:22:00.428 INFO:tasks.workunit.client.0.vm05.stdout: api_io: no timeout :/ 2026-03-09T20:22:00.428 INFO:tasks.workunit.client.0.vm05.stdout: api_io: no timeout :/ 2026-03-09T20:22:00.428 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIo.ReadTimeout (55 ms) 2026-03-09T20:22:00.428 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIo.RoundTrip 2026-03-09T20:22:00.428 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIo.RoundTrip (10 ms) 2026-03-09T20:22:00.428 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIo.Checksum 2026-03-09T20:22:00.428 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIo.Checksum (4 ms) 2026-03-09T20:22:00.428 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIo.OverlappingWriteRoundTrip 2026-03-09T20:22:00.428 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIo.OverlappingWriteRoundTrip (3 ms) 2026-03-09T20:22:00.429 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIo.WriteFullRoundTrip 2026-03-09T20:22:00.429 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIo.WriteFullRoundTrip (3 ms) 2026-03-09T20:22:00.429 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIo.AppendRoundTrip 2026-03-09T20:22:00.429 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIo.AppendRoundTrip (6 ms) 2026-03-09T20:22:00.429 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIo.ZeroLenZero 2026-03-09T20:22:00.429 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIo.ZeroLenZero (2 ms) 2026-03-09T20:22:00.429 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIo.TruncTest 2026-03-09T20:22:00.429 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIo.TruncTest (8 ms) 2026-03-09T20:22:00.429 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIo.RemoveTest 2026-03-09T20:22:00.429 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIo.RemoveTest (2 ms) 2026-03-09T20:22:00.429 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIo.XattrsRoundTrip 2026-03-09T20:22:00.429 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIo.XattrsRoundTrip (6 ms) 2026-03-09T20:22:00.429 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIo.RmXattr 2026-03-09T20:22:00.429 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIo.RmXattr (22 ms) 2026-03-09T20:22:00.429 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIo.XattrIter 2026-03-09T20:22:00.429 
INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIo.XattrIter (14 ms) 2026-03-09T20:22:00.429 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [----------] 14 tests from LibRadosIo (700 ms total) 2026-03-09T20:22:00.429 INFO:tasks.workunit.client.0.vm05.stdout: api_io: 2026-03-09T20:22:00.429 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [----------] 10 tests from LibRadosIoEC 2026-03-09T20:22:00.429 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIoEC.SimpleWrite 2026-03-09T20:22:00.429 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIoEC.SimpleWrite (2191 ms) 2026-03-09T20:22:00.429 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIoEC.RoundTrip 2026-03-09T20:22:00.429 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIoEC.RoundTrip (7 ms) 2026-03-09T20:22:00.429 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIoEC.OverlappingWriteRoundTrip 2026-03-09T20:22:00.429 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIoEC.OverlappingWriteRoundTrip (7 ms) 2026-03-09T20:22:00.429 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIoEC.WriteFullRoundTrip 2026-03-09T20:22:00.429 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIoEC.WriteFullRoundTrip (5 ms) 2026-03-09T20:22:00.429 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIoEC.AppendRoundTrip 2026-03-09T20:22:00.429 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIoEC.AppendRoundTrip (13 ms) 2026-03-09T20:22:00.429 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIoEC.TruncTest 2026-03-09T20:22:00.429 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIoEC.TruncTest (6 ms) 2026-03-09T20:22:00.429 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIoEC.RemoveTest 2026-03-09T20:22:00.429 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIoEC.RemoveTest (5 ms) 2026-03-09T20:22:00.429 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIoEC.XattrsRoundTrip 2026-03-09T20:22:00.429 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIoEC.XattrsRoundTrip (4 ms) 2026-03-09T20:22:00.429 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIoEC.RmXattr 2026-03-09T20:22:00.429 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIoEC.RmXattr (13 ms) 2026-03-09T20:22:00.429 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIoEC.XattrIter 2026-03-09T20:22:00.429 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIoEC.XattrIter (8 ms) 2026-03-09T20:22:00.429 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [----------] 10 tests from LibRadosIoEC (2259 ms total) 2026-03-09T20:22:00.429 INFO:tasks.workunit.client.0.vm05.stdout: api_io: 2026-03-09T20:22:00.429 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [--- api_stat: Running main() from gmock_main.cc 2026-03-09T20:22:00.429 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [==========] Running 9 tests from 2 test suites. 2026-03-09T20:22:00.429 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [----------] Global test environment set-up. 
2026-03-09T20:22:00.429 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [----------] 5 tests from LibRadosStat 2026-03-09T20:22:00.429 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [ RUN ] LibRadosStat.Stat 2026-03-09T20:22:00.429 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [ OK ] LibRadosStat.Stat (430 ms) 2026-03-09T20:22:00.429 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [ RUN ] LibRadosStat.Stat2 2026-03-09T20:22:00.430 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [ OK ] LibRadosStat.Stat2 (129 ms) 2026-03-09T20:22:00.430 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [ RUN ] LibRadosStat.StatNS 2026-03-09T20:22:00.430 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [ OK ] LibRadosStat.StatNS (35 ms) 2026-03-09T20:22:00.430 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [ RUN ] LibRadosStat.ClusterStat 2026-03-09T20:22:00.430 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [ OK ] LibRadosStat.ClusterStat (0 ms) 2026-03-09T20:22:00.430 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [ RUN ] LibRadosStat.PoolStat 2026-03-09T20:22:00.430 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [ OK ] LibRadosStat.PoolStat (7 ms) 2026-03-09T20:22:00.430 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [----------] 5 tests from LibRadosStat (601 ms total) 2026-03-09T20:22:00.430 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: 2026-03-09T20:22:00.430 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [----------] 4 tests from LibRadosStatEC 2026-03-09T20:22:00.430 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [ RUN ] LibRadosStatEC.Stat 2026-03-09T20:22:00.430 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [ OK ] LibRadosStatEC.Stat (2139 ms) 2026-03-09T20:22:00.430 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [ RUN ] LibRadosStatEC.StatNS 2026-03-09T20:22:00.430 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [ OK ] LibRadosStatEC.StatNS (48 ms) 2026-03-09T20:22:00.430 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [ RUN ] LibRadosStatEC.ClusterStat 2026-03-09T20:22:00.430 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [ OK ] LibRadosStatEC.ClusterStat (0 ms) 2026-03-09T20:22:00.430 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [ RUN ] LibRadosStatEC.PoolStat 2026-03-09T20:22:00.430 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [ OK ] LibRadosStatEC.PoolStat (6 ms) 2026-03-09T20:22:00.430 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [----------] 4 tests from LibRadosStatEC (2193 ms total) 2026-03-09T20:22:00.430 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: 2026-03-09T20:22:00.430 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [----------] Global test environment tear-down 2026-03-09T20:22:00.430 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [==========] 9 tests from 2 test suites ran. (10521 ms total) 2026-03-09T20:22:00.430 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [ PASSED ] 9 tests. 2026-03-09T20:22:00.433 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: Running main() from gmock_main.cc 2026-03-09T20:22:00.433 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [==========] Running 9 tests from 2 test suites. 2026-03-09T20:22:00.433 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [----------] Global test environment set-up. 
2026-03-09T20:22:00.433 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [----------] 5 tests from LibRadosStatPP 2026-03-09T20:22:00.433 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: seed 94776 2026-03-09T20:22:00.433 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [ RUN ] LibRadosStatPP.StatPP 2026-03-09T20:22:00.433 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [ OK ] LibRadosStatPP.StatPP (607 ms) 2026-03-09T20:22:00.433 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [ RUN ] LibRadosStatPP.Stat2Mtime2PP 2026-03-09T20:22:00.433 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [ OK ] LibRadosStatPP.Stat2Mtime2PP (8 ms) 2026-03-09T20:22:00.433 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [ RUN ] LibRadosStatPP.ClusterStatPP 2026-03-09T20:22:00.433 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [ OK ] LibRadosStatPP.ClusterStatPP (1 ms) 2026-03-09T20:22:00.433 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [ RUN ] LibRadosStatPP.PoolStatPP 2026-03-09T20:22:00.433 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [ OK ] LibRadosStatPP.PoolStatPP (4 ms) 2026-03-09T20:22:00.433 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [ RUN ] LibRadosStatPP.StatPPNS 2026-03-09T20:22:00.433 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [ OK ] LibRadosStatPP.StatPPNS (15 ms) 2026-03-09T20:22:00.433 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [----------] 5 tests from LibRadosStatPP (635 ms total) 2026-03-09T20:22:00.433 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: 2026-03-09T20:22:00.433 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [----------] 4 tests from LibRadosStatECPP 2026-03-09T20:22:00.433 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [ RUN ] LibRadosStatECPP.StatPP 2026-03-09T20:22:00.433 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [ OK ] LibRadosStatECPP.StatPP (2126 ms) 2026-03-09T20:22:00.433 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [ RUN ] LibRadosStatECPP.ClusterStatPP 2026-03-09T20:22:00.433 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [ OK ] LibRadosStatECPP.ClusterStatPP (1 ms) 2026-03-09T20:22:00.433 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [ RUN ] LibRadosStatECPP.PoolStatPP 2026-03-09T20:22:00.433 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [ OK ] LibRadosStatECPP.PoolStatPP (47 ms) 2026-03-09T20:22:00.433 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [ RUN ] LibRadosStatECPP.StatPPNS 2026-03-09T20:22:00.434 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [ OK ] LibRadosStatECPP.StatPPNS (12 ms) 2026-03-09T20:22:00.434 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [----------] 4 tests from LibRadosStatECPP (2186 ms total) 2026-03-09T20:22:00.434 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: 2026-03-09T20:22:00.434 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [----------] Global test environment tear-down 2026-03-09T20:22:00.434 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [==========] 9 tests from 2 test suites ran. (10498 ms total) 2026-03-09T20:22:00.434 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [ PASSED ] 9 tests. 2026-03-09T20:22:00.436 INFO:tasks.workunit.client.0.vm05.stdout:-------] Global test environment tear-down 2026-03-09T20:22:00.436 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [==========] 24 tests from 2 test suites ran. 
(10754 ms total) 2026-03-09T20:22:00.436 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ PASSED ] 24 tests. 2026-03-09T20:22:00.474 INFO:tasks.workunit.client.0.vm05.stdout: list: Running main() from gmock_main.cc 2026-03-09T20:22:00.474 INFO:tasks.workunit.client.0.vm05.stdout: list: [==========] Running 3 tests from 1 test suite. 2026-03-09T20:22:00.474 INFO:tasks.workunit.client.0.vm05.stdout: list: [----------] Global test environment set-up. 2026-03-09T20:22:00.474 INFO:tasks.workunit.client.0.vm05.stdout: list: [----------] 3 tests from NeoradosList 2026-03-09T20:22:00.474 INFO:tasks.workunit.client.0.vm05.stdout: list: [ RUN ] NeoradosList.ListObjects 2026-03-09T20:22:00.474 INFO:tasks.workunit.client.0.vm05.stdout: list: [ OK ] NeoradosList.ListObjects (2844 ms) 2026-03-09T20:22:00.474 INFO:tasks.workunit.client.0.vm05.stdout: list: [ RUN ] NeoradosList.ListObjectsNS 2026-03-09T20:22:00.474 INFO:tasks.workunit.client.0.vm05.stdout: list: [ OK ] NeoradosList.ListObjectsNS (3920 ms) 2026-03-09T20:22:00.474 INFO:tasks.workunit.client.0.vm05.stdout: list: [ RUN ] NeoradosList.ListObjectsMany 2026-03-09T20:22:00.474 INFO:tasks.workunit.client.0.vm05.stdout: list: [ OK ] NeoradosList.ListObjectsMany (3068 ms) 2026-03-09T20:22:00.474 INFO:tasks.workunit.client.0.vm05.stdout: list: [----------] 3 tests from NeoradosList (9832 ms total) 2026-03-09T20:22:00.474 INFO:tasks.workunit.client.0.vm05.stdout: list: 2026-03-09T20:22:00.474 INFO:tasks.workunit.client.0.vm05.stdout: list: [----------] Global test environment tear-down 2026-03-09T20:22:00.474 INFO:tasks.workunit.client.0.vm05.stdout: list: [==========] 3 tests from 1 test suite ran. (9840 ms total) 2026-03-09T20:22:00.474 INFO:tasks.workunit.client.0.vm05.stdout: list: [ PASSED ] 3 tests. 2026-03-09T20:22:00.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[61345]: 16.7 deep-scrub starts 2026-03-09T20:22:00.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[61345]: 16.7 deep-scrub ok 2026-03-09T20:22:00.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[61345]: 16.5 deep-scrub starts 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[61345]: 16.5 deep-scrub ok 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[61345]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[61345]: from='client.24905 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-4-cache", "mode": "writeback"}]': finished 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[61345]: from='client.25202 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-94822-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3819866823' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm05-95542-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[61345]: from='client.25006 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm05-94771-7"}]': finished 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[61345]: from='client.25021 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm05-94776-7"}]': finished 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[61345]: from='client.25366 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[61345]: from='client.24967 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm05-94350-16"}]': finished 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2315717979' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm05-94350-16"}]: dispatch 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2357293795' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm05-94771-7"}]: dispatch 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1240899669' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm05-94776-7"}]: dispatch 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[61345]: osdmap e70: 8 total, 8 up, 8 in 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/696721188' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3819866823' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsNSvm05-95542-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm05-95542-2"}]: dispatch 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[61345]: from='client.24967 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm05-94350-16"}]: dispatch 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[61345]: from='client.25006 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm05-94771-7"}]: dispatch 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[61345]: from='client.25021 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm05-94776-7"}]: dispatch 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[61345]: from='client.25003 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3348390106' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-94822-6", "pg_num": 4}]: dispatch 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[61345]: from='client.25202 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-94822-6", "pg_num": 4}]: dispatch 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2289751889' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-4"}]: dispatch 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[61345]: from='client.24905 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-4"}]: dispatch 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[51870]: 16.7 deep-scrub starts 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[51870]: 16.7 deep-scrub ok 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[51870]: 16.5 deep-scrub starts 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[51870]: 16.5 deep-scrub ok 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[51870]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[51870]: from='client.24905 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-4-cache", "mode": "writeback"}]': finished 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[51870]: from='client.25202 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-94822-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3819866823' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm05-95542-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[51870]: from='client.25006 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm05-94771-7"}]': finished 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[51870]: from='client.25021 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm05-94776-7"}]': finished 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[51870]: from='client.25366 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[51870]: from='client.24967 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm05-94350-16"}]': finished 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2315717979' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm05-94350-16"}]: dispatch 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2357293795' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm05-94771-7"}]: dispatch 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/1240899669' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm05-94776-7"}]: dispatch 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[51870]: osdmap e70: 8 total, 8 up, 8 in 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/696721188' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3819866823' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsNSvm05-95542-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm05-95542-2"}]: dispatch 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[51870]: from='client.24967 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm05-94350-16"}]: dispatch 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[51870]: from='client.25006 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm05-94771-7"}]: dispatch 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[51870]: from='client.25021 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm05-94776-7"}]: dispatch 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[51870]: from='client.25003 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3348390106' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-94822-6", "pg_num": 4}]: dispatch 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[51870]: from='client.25202 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-94822-6", "pg_num": 4}]: dispatch 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2289751889' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-4"}]: dispatch 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[51870]: from='client.24905 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-4"}]: dispatch 2026-03-09T20:22:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:00 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:00.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:00 vm09 ceph-mon[54524]: 16.7 deep-scrub starts 2026-03-09T20:22:00.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:00 vm09 ceph-mon[54524]: 16.7 deep-scrub ok 2026-03-09T20:22:00.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:00 vm09 ceph-mon[54524]: 16.5 deep-scrub starts 2026-03-09T20:22:00.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:00 vm09 ceph-mon[54524]: 16.5 deep-scrub ok 2026-03-09T20:22:00.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:00 vm09 ceph-mon[54524]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:22:00.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:00 vm09 ceph-mon[54524]: from='client.24905 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-4-cache", "mode": "writeback"}]': finished 2026-03-09T20:22:00.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:00 vm09 ceph-mon[54524]: from='client.25202 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-94822-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:00.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:00 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3819866823' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm05-95542-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:22:00.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:00 vm09 ceph-mon[54524]: from='client.25006 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm05-94771-7"}]': finished 2026-03-09T20:22:00.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:00 vm09 ceph-mon[54524]: from='client.25021 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm05-94776-7"}]': finished 2026-03-09T20:22:00.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:00 vm09 ceph-mon[54524]: from='client.25366 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:00.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:00 vm09 ceph-mon[54524]: from='client.24967 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm05-94350-16"}]': finished 2026-03-09T20:22:00.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:00 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2315717979' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm05-94350-16"}]: dispatch 2026-03-09T20:22:00.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:00 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2357293795' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm05-94771-7"}]: dispatch 2026-03-09T20:22:00.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:00 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/1240899669' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm05-94776-7"}]: dispatch 2026-03-09T20:22:00.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:00 vm09 ceph-mon[54524]: osdmap e70: 8 total, 8 up, 8 in 2026-03-09T20:22:00.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:00 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/696721188' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:22:00.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:00 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3819866823' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsNSvm05-95542-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm05-95542-2"}]: dispatch 2026-03-09T20:22:00.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:00 vm09 ceph-mon[54524]: from='client.24967 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm05-94350-16"}]: dispatch 2026-03-09T20:22:00.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:00 vm09 ceph-mon[54524]: from='client.25006 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm05-94771-7"}]: dispatch 2026-03-09T20:22:00.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:00 vm09 ceph-mon[54524]: from='client.25021 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm05-94776-7"}]: dispatch 2026-03-09T20:22:00.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:00 vm09 ceph-mon[54524]: from='client.25003 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:22:00.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:00 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3348390106' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-94822-6", "pg_num": 4}]: dispatch 2026-03-09T20:22:00.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:00 vm09 ceph-mon[54524]: from='client.25202 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-94822-6", "pg_num": 4}]: dispatch 2026-03-09T20:22:00.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:00 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2289751889' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-4"}]: dispatch 2026-03-09T20:22:00.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:00 vm09 ceph-mon[54524]: from='client.24905 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-4"}]: dispatch 2026-03-09T20:22:00.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:00 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:01.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: 16.8 deep-scrub starts 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: 16.8 deep-scrub ok 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: pgmap v52: 852 pgs: 32 creating+peering, 336 unknown, 484 active+clean; 459 KiB data, 348 MiB used, 160 GiB / 160 GiB avail 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: from='client.25351 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReadOpvm05-95462-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm05-95462-2"}]': finished 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: from='client.24967 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm05-94350-16"}]': finished 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: from='client.25006 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm05-94771-7"}]': finished 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: from='client.25021 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm05-94776-7"}]': finished 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: from='client.25003 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-94338-23"}]': finished 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: from='client.25202 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-94822-6", "pg_num": 4}]': finished 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: from='client.24905 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-4"}]': finished 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: osdmap e71: 8 total, 8 up, 8 in 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/696721188' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3348390106' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-94822-4", "tierpool": "test-rados-api-vm05-94822-6", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2289751889' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-4", "tierpool": "test-rados-api-vm05-94573-4-cache"}]: dispatch 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/636944903' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip_vm05-94281-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: from='client.25003 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: from='client.25202 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-94822-4", "tierpool": "test-rados-api-vm05-94822-6", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: from='client.24905 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-4", "tierpool": "test-rados-api-vm05-94573-4-cache"}]: dispatch 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: from='client.25232 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip_vm05-94281-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1725649079' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: from='client.25411 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2519115841' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm05-94855-12"}]: dispatch 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: from='client.25435 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm05-94855-12"}]: dispatch 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2519115841' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm05-94855-12"}]: dispatch 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: from='client.25435 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm05-94855-12"}]: dispatch 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/2519115841' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm05-94855-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: from='client.25435 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm05-94855-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3819866823' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsNSvm05-95542-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm05-95542-2"}]': finished 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: from='client.25003 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-94338-23"}]': finished 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: from='client.25202 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-94822-4", "tierpool": "test-rados-api-vm05-94822-6", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: from='client.24905 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-4", "tierpool": "test-rados-api-vm05-94573-4-cache"}]': finished 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: from='client.25232 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip_vm05-94281-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: from='client.25411 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: from='client.25435 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm05-94855-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: osdmap e72: 8 total, 8 up, 8 in 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2519115841' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm05-94855-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm05-94855-12"}]: dispatch 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/2970489655' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: from='client.25435 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm05-94855-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm05-94855-12"}]: dispatch 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: from='client.25417 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3348390106' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-94822-4", "overlaypool": "test-rados-api-vm05-94822-6"}]: dispatch 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: from='client.25202 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-94822-4", "overlaypool": "test-rados-api-vm05-94822-6"}]: dispatch 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/749287343' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: from='client.25465 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/749287343' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: from='client.25465 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/749287343' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm05-94338-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: from='client.25465 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm05-94338-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:22:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: 16.8 deep-scrub starts 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: 16.8 deep-scrub ok 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: pgmap v52: 852 pgs: 32 creating+peering, 336 unknown, 484 active+clean; 459 KiB data, 348 MiB used, 160 GiB / 160 GiB avail 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: from='client.25351 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReadOpvm05-95462-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm05-95462-2"}]': finished 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: from='client.24967 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm05-94350-16"}]': finished 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: from='client.25006 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm05-94771-7"}]': finished 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: from='client.25021 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm05-94776-7"}]': finished 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: from='client.25003 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-94338-23"}]': finished 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: from='client.25202 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-94822-6", "pg_num": 4}]': finished 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: from='client.24905 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-4"}]': finished 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: osdmap e71: 8 total, 8 up, 8 in 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/696721188' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3348390106' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-94822-4", "tierpool": "test-rados-api-vm05-94822-6", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2289751889' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-4", "tierpool": "test-rados-api-vm05-94573-4-cache"}]: dispatch 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/636944903' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip_vm05-94281-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: from='client.25003 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: from='client.25202 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-94822-4", "tierpool": "test-rados-api-vm05-94822-6", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: from='client.24905 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-4", "tierpool": "test-rados-api-vm05-94573-4-cache"}]: dispatch 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: from='client.25232 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip_vm05-94281-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1725649079' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: from='client.25411 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2519115841' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm05-94855-12"}]: dispatch 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: from='client.25435 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm05-94855-12"}]: dispatch 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2519115841' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm05-94855-12"}]: dispatch 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: from='client.25435 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm05-94855-12"}]: dispatch 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/2519115841' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm05-94855-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: from='client.25435 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm05-94855-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3819866823' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsNSvm05-95542-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm05-95542-2"}]': finished 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: from='client.25003 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-94338-23"}]': finished 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: from='client.25202 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-94822-4", "tierpool": "test-rados-api-vm05-94822-6", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: from='client.24905 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-4", "tierpool": "test-rados-api-vm05-94573-4-cache"}]': finished 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: from='client.25232 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip_vm05-94281-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: from='client.25411 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: from='client.25435 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm05-94855-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: osdmap e72: 8 total, 8 up, 8 in 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2519115841' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm05-94855-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm05-94855-12"}]: dispatch 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/2970489655' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: from='client.25435 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm05-94855-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm05-94855-12"}]: dispatch 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: from='client.25417 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3348390106' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-94822-4", "overlaypool": "test-rados-api-vm05-94822-6"}]: dispatch 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: from='client.25202 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-94822-4", "overlaypool": "test-rados-api-vm05-94822-6"}]: dispatch 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/749287343' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: from='client.25465 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/749287343' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: from='client.25465 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/749287343' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm05-94338-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: from='client.25465 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm05-94338-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:22:01.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:01 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:01.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: 16.8 deep-scrub starts 2026-03-09T20:22:01.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: 16.8 deep-scrub ok 2026-03-09T20:22:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: pgmap v52: 852 pgs: 32 creating+peering, 336 unknown, 484 active+clean; 459 KiB data, 348 MiB used, 160 GiB / 160 GiB avail 2026-03-09T20:22:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: from='client.25351 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReadOpvm05-95462-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm05-95462-2"}]': finished 2026-03-09T20:22:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: from='client.24967 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm05-94350-16"}]': finished 2026-03-09T20:22:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: from='client.25006 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm05-94771-7"}]': finished 2026-03-09T20:22:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: from='client.25021 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm05-94776-7"}]': finished 2026-03-09T20:22:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: from='client.25003 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-94338-23"}]': finished 2026-03-09T20:22:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: from='client.25202 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-94822-6", "pg_num": 4}]': finished 2026-03-09T20:22:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: from='client.24905 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-4"}]': finished 2026-03-09T20:22:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: osdmap e71: 8 total, 8 up, 8 in 2026-03-09T20:22:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/696721188' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:22:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3348390106' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-94822-4", "tierpool": "test-rados-api-vm05-94822-6", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:22:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2289751889' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-4", "tierpool": "test-rados-api-vm05-94573-4-cache"}]: dispatch 2026-03-09T20:22:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/636944903' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip_vm05-94281-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: from='client.25003 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:22:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: from='client.25202 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-94822-4", "tierpool": "test-rados-api-vm05-94822-6", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:22:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: from='client.24905 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-4", "tierpool": "test-rados-api-vm05-94573-4-cache"}]: dispatch 2026-03-09T20:22:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: from='client.25232 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip_vm05-94281-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1725649079' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: from='client.25411 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2519115841' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm05-94855-12"}]: dispatch 2026-03-09T20:22:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: from='client.25435 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm05-94855-12"}]: dispatch 2026-03-09T20:22:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2519115841' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm05-94855-12"}]: dispatch 2026-03-09T20:22:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: from='client.25435 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm05-94855-12"}]: dispatch 2026-03-09T20:22:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/2519115841' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm05-94855-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:22:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: from='client.25435 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm05-94855-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:22:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:22:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3819866823' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsNSvm05-95542-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm05-95542-2"}]': finished 2026-03-09T20:22:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: from='client.25003 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-94338-23"}]': finished 2026-03-09T20:22:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: from='client.25202 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-94822-4", "tierpool": "test-rados-api-vm05-94822-6", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:22:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: from='client.24905 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-4", "tierpool": "test-rados-api-vm05-94573-4-cache"}]': finished 2026-03-09T20:22:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: from='client.25232 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip_vm05-94281-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: from='client.25411 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: from='client.25435 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm05-94855-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:22:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: osdmap e72: 8 total, 8 up, 8 in 2026-03-09T20:22:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2519115841' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm05-94855-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm05-94855-12"}]: dispatch 2026-03-09T20:22:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/2970489655' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: from='client.25435 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm05-94855-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm05-94855-12"}]: dispatch 2026-03-09T20:22:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: from='client.25417 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3348390106' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-94822-4", "overlaypool": "test-rados-api-vm05-94822-6"}]: dispatch 2026-03-09T20:22:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: from='client.25202 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-94822-4", "overlaypool": "test-rados-api-vm05-94822-6"}]: dispatch 2026-03-09T20:22:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/749287343' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:22:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: from='client.25465 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:22:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/749287343' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:22:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: from='client.25465 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:22:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/749287343' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm05-94338-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:22:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: from='client.25465 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm05-94338-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:22:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:01 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:03.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:03 vm05 ceph-mon[61345]: from='client.25417 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:03.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:03 vm05 ceph-mon[61345]: from='client.25202 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-94822-4", "overlaypool": "test-rados-api-vm05-94822-6"}]': finished 2026-03-09T20:22:03.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:03 vm05 ceph-mon[61345]: from='client.25465 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm05-94338-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:22:03.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:03 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/749287343' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm05-94338-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:22:03.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:03 vm05 ceph-mon[61345]: osdmap e73: 8 total, 8 up, 8 in 2026-03-09T20:22:03.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:03 vm05 ceph-mon[61345]: from='client.25465 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm05-94338-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:22:03.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:03 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1614400225' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm05-94564-10"}]: dispatch 2026-03-09T20:22:03.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:03 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3767048403' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm05-94655-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:03.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:03 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1767113807' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm05-95462-2"}]: dispatch 2026-03-09T20:22:03.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:03 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/101270927' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-94559-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:03.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:03 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3781038933' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm05-94410-10"}]: dispatch 2026-03-09T20:22:03.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:03 vm05 ceph-mon[61345]: from='client.25189 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm05-94564-10"}]: dispatch 2026-03-09T20:22:03.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:03 vm05 ceph-mon[61345]: from='client.25459 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm05-94655-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:03.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:03 vm05 ceph-mon[61345]: from='client.25351 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm05-95462-2"}]: dispatch 2026-03-09T20:22:03.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:03 vm05 ceph-mon[61345]: from='client.25468 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-94559-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:03.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:03 vm05 ceph-mon[61345]: from='client.25183 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm05-94410-10"}]: dispatch 2026-03-09T20:22:03.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:03 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3348390106' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94822-6", "mode": "writeback"}]: dispatch 2026-03-09T20:22:03.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:03 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:03.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:03 vm05 ceph-mon[61345]: from='client.25202 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94822-6", "mode": "writeback"}]: dispatch 2026-03-09T20:22:03.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:03 vm05 ceph-mon[61345]: pgmap v56: 752 pgs: 1 active+clean+snaptrim, 268 unknown, 483 active+clean; 144 MiB data, 797 MiB used, 159 GiB / 160 GiB avail; 13 MiB/s rd, 39 MiB/s wr, 44 op/s 2026-03-09T20:22:03.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:03 vm05 ceph-mon[51870]: from='client.25417 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:03.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:03 vm05 ceph-mon[51870]: from='client.25202 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-94822-4", "overlaypool": "test-rados-api-vm05-94822-6"}]': finished 2026-03-09T20:22:03.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:03 vm05 ceph-mon[51870]: from='client.25465 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm05-94338-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:22:03.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:03 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/749287343' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm05-94338-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:22:03.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:03 vm05 ceph-mon[51870]: osdmap e73: 8 total, 8 up, 8 in 2026-03-09T20:22:03.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:03 vm05 ceph-mon[51870]: from='client.25465 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm05-94338-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:22:03.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:03 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1614400225' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm05-94564-10"}]: dispatch 2026-03-09T20:22:03.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:03 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3767048403' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm05-94655-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:03.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:03 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1767113807' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm05-95462-2"}]: dispatch 2026-03-09T20:22:03.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:03 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/101270927' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-94559-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:03.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:03 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3781038933' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm05-94410-10"}]: dispatch 2026-03-09T20:22:03.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:03 vm05 ceph-mon[51870]: from='client.25189 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm05-94564-10"}]: dispatch 2026-03-09T20:22:03.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:03 vm05 ceph-mon[51870]: from='client.25459 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm05-94655-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:03.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:03 vm05 ceph-mon[51870]: from='client.25351 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm05-95462-2"}]: dispatch 2026-03-09T20:22:03.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:03 vm05 ceph-mon[51870]: from='client.25468 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-94559-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:03.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:03 vm05 ceph-mon[51870]: from='client.25183 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm05-94410-10"}]: dispatch 2026-03-09T20:22:03.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:03 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3348390106' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94822-6", "mode": "writeback"}]: dispatch 2026-03-09T20:22:03.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:03 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:03.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:03 vm05 ceph-mon[51870]: from='client.25202 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94822-6", "mode": "writeback"}]: dispatch 2026-03-09T20:22:03.411 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:03 vm05 ceph-mon[51870]: pgmap v56: 752 pgs: 1 active+clean+snaptrim, 268 unknown, 483 active+clean; 144 MiB data, 797 MiB used, 159 GiB / 160 GiB avail; 13 MiB/s rd, 39 MiB/s wr, 44 op/s 2026-03-09T20:22:03.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:03 vm09 ceph-mon[54524]: from='client.25417 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:03.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:03 vm09 ceph-mon[54524]: from='client.25202 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-94822-4", "overlaypool": "test-rados-api-vm05-94822-6"}]': finished 2026-03-09T20:22:03.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:03 vm09 ceph-mon[54524]: from='client.25465 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm05-94338-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:22:03.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:03 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/749287343' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm05-94338-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:22:03.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:03 vm09 ceph-mon[54524]: osdmap e73: 8 total, 8 up, 8 in 2026-03-09T20:22:03.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:03 vm09 ceph-mon[54524]: from='client.25465 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm05-94338-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:22:03.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:03 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1614400225' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm05-94564-10"}]: dispatch 2026-03-09T20:22:03.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:03 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3767048403' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm05-94655-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:03.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:03 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1767113807' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm05-95462-2"}]: dispatch 2026-03-09T20:22:03.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:03 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/101270927' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-94559-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:03.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:03 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3781038933' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm05-94410-10"}]: dispatch 2026-03-09T20:22:03.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:03 vm09 ceph-mon[54524]: from='client.25189 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm05-94564-10"}]: dispatch 2026-03-09T20:22:03.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:03 vm09 ceph-mon[54524]: from='client.25459 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm05-94655-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:03.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:03 vm09 ceph-mon[54524]: from='client.25351 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm05-95462-2"}]: dispatch 2026-03-09T20:22:03.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:03 vm09 ceph-mon[54524]: from='client.25468 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-94559-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:03.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:03 vm09 ceph-mon[54524]: from='client.25183 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm05-94410-10"}]: dispatch 2026-03-09T20:22:03.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:03 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3348390106' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94822-6", "mode": "writeback"}]: dispatch 2026-03-09T20:22:03.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:03 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:03.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:03 vm09 ceph-mon[54524]: from='client.25202 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94822-6", "mode": "writeback"}]: dispatch 2026-03-09T20:22:03.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:03 vm09 ceph-mon[54524]: pgmap v56: 752 pgs: 1 active+clean+snaptrim, 268 unknown, 483 active+clean; 144 MiB data, 797 MiB used, 159 GiB / 160 GiB avail; 13 MiB/s rd, 39 MiB/s wr, 44 op/s 2026-03-09T20:22:04.172 INFO:tasks.workunit.client.0.vm05.stdout: ec_io: Running main() from gmock_main.cc 2026-03-09T20:22:04.172 INFO:tasks.workunit.client.0.vm05.stdout: ec_io: [==========] Running 2 tests from 1 test suite. 2026-03-09T20:22:04.172 INFO:tasks.workunit.client.0.vm05.stdout: ec_io: [----------] Global test environment set-up. 
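[editor's note] The mon entries above show the rados API tests driving the full lifecycle of an erasure-coded test pool through JSON mon commands. A rough CLI equivalent of that sequence, using the profile and pool names taken verbatim from the log, would be:

    # set an EC profile (k=2, m=1, failure domain = osd), as dispatched above
    ceph osd erasure-code-profile set testprofile-LibRadosIoECPP_vm05-94338-23 k=2 m=1 crush-failure-domain=osd
    # create the EC pool with 8 PGs / 8 PGPs using that profile
    ceph osd pool create LibRadosIoECPP_vm05-94338-23 8 8 erasure testprofile-LibRadosIoECPP_vm05-94338-23
    # tag the pool with an application so POOL_APP_NOT_ENABLED is not raised
    ceph osd pool application enable LibRadosIoECPP_vm05-94338-23 rados --yes-i-really-mean-it
    # teardown: drop the auto-created crush rule and the profile again
    ceph osd crush rule rm LibRadosIoECPP_vm05-94338-23
    ceph osd erasure-code-profile rm testprofile-LibRadosIoECPP_vm05-94338-23
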
2026-03-09T20:22:04.172 INFO:tasks.workunit.client.0.vm05.stdout: ec_io: [----------] 2 tests from NeoRadosECIo 2026-03-09T20:22:04.172 INFO:tasks.workunit.client.0.vm05.stdout: ec_io: [ RUN ] NeoRadosECIo.SimpleWrite 2026-03-09T20:22:04.172 INFO:tasks.workunit.client.0.vm05.stdout: ec_io: [ OK ] NeoRadosECIo.SimpleWrite (6807 ms) 2026-03-09T20:22:04.172 INFO:tasks.workunit.client.0.vm05.stdout: ec_io: [ RUN ] NeoRadosECIo.ReadOp 2026-03-09T20:22:04.172 INFO:tasks.workunit.client.0.vm05.stdout: ec_io: [ OK ] NeoRadosECIo.ReadOp (6806 ms) 2026-03-09T20:22:04.172 INFO:tasks.workunit.client.0.vm05.stdout: ec_io: [----------] 2 tests from NeoRadosECIo (13613 ms total) 2026-03-09T20:22:04.172 INFO:tasks.workunit.client.0.vm05.stdout: ec_io: 2026-03-09T20:22:04.172 INFO:tasks.workunit.client.0.vm05.stdout: ec_io: [----------] Global test environment tear-down 2026-03-09T20:22:04.172 INFO:tasks.workunit.client.0.vm05.stdout: ec_io: [==========] 2 tests from 1 test suite ran. (13613 ms total) 2026-03-09T20:22:04.172 INFO:tasks.workunit.client.0.vm05.stdout: ec_io: [ PASSED ] 2 tests. 2026-03-09T20:22:04.179 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: Running main() from gmock_main.cc 2026-03-09T20:22:04.179 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [==========] Running 16 tests from 2 test suites. 2026-03-09T20:22:04.179 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [----------] Global test environment set-up. 2026-03-09T20:22:04.179 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [----------] 8 tests from LibRadosLock 2026-03-09T20:22:04.179 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ RUN ] LibRadosLock.LockExclusive 2026-03-09T20:22:04.179 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ OK ] LibRadosLock.LockExclusive (575 ms) 2026-03-09T20:22:04.179 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ RUN ] LibRadosLock.LockShared 2026-03-09T20:22:04.179 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ OK ] LibRadosLock.LockShared (7 ms) 2026-03-09T20:22:04.179 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ RUN ] LibRadosLock.LockExclusiveDur 2026-03-09T20:22:04.179 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ OK ] LibRadosLock.LockExclusiveDur (1016 ms) 2026-03-09T20:22:04.179 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ RUN ] LibRadosLock.LockSharedDur 2026-03-09T20:22:04.179 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ OK ] LibRadosLock.LockSharedDur (1017 ms) 2026-03-09T20:22:04.179 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ RUN ] LibRadosLock.LockMayRenew 2026-03-09T20:22:04.179 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ OK ] LibRadosLock.LockMayRenew (8 ms) 2026-03-09T20:22:04.179 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ RUN ] LibRadosLock.Unlock 2026-03-09T20:22:04.179 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ OK ] LibRadosLock.Unlock (7 ms) 2026-03-09T20:22:04.179 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ RUN ] LibRadosLock.ListLockers 2026-03-09T20:22:04.179 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ OK ] LibRadosLock.ListLockers (7 ms) 2026-03-09T20:22:04.179 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ RUN ] LibRadosLock.BreakLock 2026-03-09T20:22:04.179 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ OK ] LibRadosLock.BreakLock (4 ms) 2026-03-09T20:22:04.179 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [----------] 8 tests from LibRadosLock (2641 ms total) 2026-03-09T20:22:04.179 
INFO:tasks.workunit.client.0.vm05.stdout: api_lock: 2026-03-09T20:22:04.179 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [----------] 8 tests from LibRadosLockEC 2026-03-09T20:22:04.179 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ RUN ] LibRadosLockEC.LockExclusive 2026-03-09T20:22:04.179 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ OK ] LibRadosLockEC.LockExclusive (1105 ms) 2026-03-09T20:22:04.179 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ RUN ] LibRadosLockEC.LockShared 2026-03-09T20:22:04.179 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ OK ] LibRadosLockEC.LockShared (22 ms) 2026-03-09T20:22:04.179 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ RUN ] LibRadosLockEC.LockExclusiveDur 2026-03-09T20:22:04.179 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ OK ] LibRadosLockEC.LockExclusiveDur (1063 ms) 2026-03-09T20:22:04.179 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ RUN ] LibRadosLockEC.LockSharedDur 2026-03-09T20:22:04.179 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ OK ] LibRadosLockEC.LockSharedDur (1006 ms) 2026-03-09T20:22:04.179 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ RUN ] LibRadosLockEC.LockMayRenew 2026-03-09T20:22:04.179 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ OK ] LibRadosLockEC.LockMayRenew (4 ms) 2026-03-09T20:22:04.179 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ RUN ] LibRadosLockEC.Unlock 2026-03-09T20:22:04.179 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ OK ] LibRadosLockEC.Unlock (5 ms) 2026-03-09T20:22:04.179 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ RUN ] LibRadosLockEC.ListLockers 2026-03-09T20:22:04.179 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ OK ] LibRadosLockEC.ListLockers (5 ms) 2026-03-09T20:22:04.179 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ RUN ] LibRadosLockEC.BreakLock 2026-03-09T20:22:04.179 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ OK ] LibRadosLockEC.BreakLock (3 ms) 2026-03-09T20:22:04.179 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [----------] 8 tests from LibRadosLockEC (3213 ms total) 2026-03-09T20:22:04.180 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: 2026-03-09T20:22:04.180 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [----------] Global test environment tear-down 2026-03-09T20:22:04.180 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [==========] 16 tests from 2 test suites ran. (14473 ms total) 2026-03-09T20:22:04.180 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ PASSED ] 16 tests. 2026-03-09T20:22:04.183 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: Running main() from gmock_main.cc 2026-03-09T20:22:04.183 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [==========] Running 16 tests from 2 test suites. 2026-03-09T20:22:04.183 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [----------] Global test environment set-up. 
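[editor's note] The per-suite blocks above (ec_io, api_lock, api_lock_pp, ...) are gtest binaries that the workunit runs in parallel, each prefixed with its short name. To reproduce a single suite against the same cluster one would typically invoke the corresponding binary directly with a gtest filter; the binary name below is an assumption based on the suite prefix:

    # assumed binary name; run only the LibRadosLockEC suite
    ceph_test_rados_api_lock --gtest_filter='LibRadosLockEC.*'
    # list the tests a binary contains without running them
    ceph_test_rados_api_lock --gtest_list_tests
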
2026-03-09T20:22:04.183 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [----------] 8 tests from LibRadosLockPP 2026-03-09T20:22:04.183 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: seed 94564 2026-03-09T20:22:04.183 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.LockExclusivePP 2026-03-09T20:22:04.183 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ OK ] LibRadosLockPP.LockExclusivePP (535 ms) 2026-03-09T20:22:04.183 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.LockSharedPP 2026-03-09T20:22:04.183 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ OK ] LibRadosLockPP.LockSharedPP (9 ms) 2026-03-09T20:22:04.183 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.LockExclusiveDurPP 2026-03-09T20:22:04.183 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ OK ] LibRadosLockPP.LockExclusiveDurPP (1011 ms) 2026-03-09T20:22:04.183 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.LockSharedDurPP 2026-03-09T20:22:04.183 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ OK ] LibRadosLockPP.LockSharedDurPP (1008 ms) 2026-03-09T20:22:04.183 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.LockMayRenewPP 2026-03-09T20:22:04.184 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ OK ] LibRadosLockPP.LockMayRenewPP (9 ms) 2026-03-09T20:22:04.184 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.UnlockPP 2026-03-09T20:22:04.184 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ OK ] LibRadosLockPP.UnlockPP (10 ms) 2026-03-09T20:22:04.184 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.ListLockersPP 2026-03-09T20:22:04.184 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ OK ] LibRadosLockPP.ListLockersPP (5 ms) 2026-03-09T20:22:04.184 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.BreakLockPP 2026-03-09T20:22:04.184 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ OK ] LibRadosLockPP.BreakLockPP (5 ms) 2026-03-09T20:22:04.184 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [----------] 8 tests from LibRadosLockPP (2592 ms total) 2026-03-09T20:22:04.184 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: 2026-03-09T20:22:04.184 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [----------] 8 tests from LibRadosLockECPP 2026-03-09T20:22:04.184 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ RUN ] LibRadosLockECPP.LockExclusivePP 2026-03-09T20:22:04.184 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.LockExclusivePP (1074 ms) 2026-03-09T20:22:04.184 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ RUN ] LibRadosLockECPP.LockSharedPP 2026-03-09T20:22:04.184 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.LockSharedPP (29 ms) 2026-03-09T20:22:04.184 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ RUN ] LibRadosLockECPP.LockExclusiveDurPP 2026-03-09T20:22:04.184 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.LockExclusiveDurPP (1135 ms) 2026-03-09T20:22:04.184 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ RUN ] LibRadosLockECPP.LockSharedDurPP 2026-03-09T20:22:04.184 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.LockSharedDurPP (1005 ms) 2026-03-09T20:22:04.184 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ RUN ] 
LibRadosLockECPP.LockMayRenewPP 2026-03-09T20:22:04.184 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.LockMayRenewPP (4 ms) 2026-03-09T20:22:04.184 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ RUN ] LibRadosLockECPP.UnlockPP 2026-03-09T20:22:04.184 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.UnlockPP (4 ms) 2026-03-09T20:22:04.184 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ RUN ] LibRadosLockECPP.ListLockersPP 2026-03-09T20:22:04.184 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.ListLockersPP (4 ms) 2026-03-09T20:22:04.184 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ RUN ] LibRadosLockECPP.BreakLockPP 2026-03-09T20:22:04.184 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.BreakLockPP (4 ms) 2026-03-09T20:22:04.184 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [----------] 8 tests from LibRadosLockECPP (3259 ms total) 2026-03-09T20:22:04.184 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: 2026-03-09T20:22:04.184 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [----------] Global test environment tear-down 2026-03-09T20:22:04.184 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [==========] 16 tests from 2 test suites ran. (14403 ms total) 2026-03-09T20:22:04.184 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ PASSED ] 16 tests. 2026-03-09T20:22:04.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[61345]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:22:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[61345]: from='client.25435 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm05-94855-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm05-94855-12"}]': finished 2026-03-09T20:22:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[61345]: from='client.25189 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm05-94564-10"}]': finished 2026-03-09T20:22:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[61345]: from='client.25459 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm05-94655-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[61345]: from='client.25351 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm05-95462-2"}]': finished 2026-03-09T20:22:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[61345]: from='client.25468 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-94559-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[61345]: from='client.25183 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm05-94410-10"}]': finished 2026-03-09T20:22:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[61345]: from='client.25202 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": 
"test-rados-api-vm05-94822-6", "mode": "writeback"}]': finished 2026-03-09T20:22:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3781038933' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm05-94410-10"}]: dispatch 2026-03-09T20:22:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1614400225' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm05-94564-10"}]: dispatch 2026-03-09T20:22:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1767113807' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReadOpvm05-95462-2"}]: dispatch 2026-03-09T20:22:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[61345]: osdmap e74: 8 total, 8 up, 8 in 2026-03-09T20:22:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3093820679' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3343522874' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm05-94281-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[61345]: from='client.25183 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm05-94410-10"}]: dispatch 2026-03-09T20:22:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[61345]: from='client.25189 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm05-94564-10"}]: dispatch 2026-03-09T20:22:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[61345]: from='client.25351 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReadOpvm05-95462-2"}]: dispatch 2026-03-09T20:22:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[61345]: from='client.25477 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[61345]: from='client.25483 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm05-94281-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3819866823' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm05-95542-2"}]: dispatch 2026-03-09T20:22:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/101270927' entity='client.admin' cmd=[{"prefix":"osd dump"}]: dispatch 2026-03-09T20:22:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/101270927' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-94559-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[61345]: from='client.25468 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-94559-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[51870]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:22:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[51870]: from='client.25435 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm05-94855-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm05-94855-12"}]': finished 2026-03-09T20:22:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[51870]: from='client.25189 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm05-94564-10"}]': finished 2026-03-09T20:22:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[51870]: from='client.25459 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm05-94655-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[51870]: from='client.25351 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm05-95462-2"}]': finished 2026-03-09T20:22:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[51870]: from='client.25468 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-94559-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[51870]: from='client.25183 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm05-94410-10"}]': finished 2026-03-09T20:22:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[51870]: from='client.25202 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94822-6", "mode": "writeback"}]': finished 2026-03-09T20:22:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3781038933' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm05-94410-10"}]: dispatch 2026-03-09T20:22:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1614400225' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm05-94564-10"}]: dispatch 2026-03-09T20:22:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/1767113807' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReadOpvm05-95462-2"}]: dispatch 2026-03-09T20:22:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[51870]: osdmap e74: 8 total, 8 up, 8 in 2026-03-09T20:22:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3093820679' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3343522874' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm05-94281-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[51870]: from='client.25183 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm05-94410-10"}]: dispatch 2026-03-09T20:22:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[51870]: from='client.25189 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm05-94564-10"}]: dispatch 2026-03-09T20:22:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[51870]: from='client.25351 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReadOpvm05-95462-2"}]: dispatch 2026-03-09T20:22:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[51870]: from='client.25477 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[51870]: from='client.25483 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm05-94281-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3819866823' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm05-95542-2"}]: dispatch 2026-03-09T20:22:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/101270927' entity='client.admin' cmd=[{"prefix":"osd dump"}]: dispatch 2026-03-09T20:22:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/101270927' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-94559-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:04 vm05 ceph-mon[51870]: from='client.25468 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-94559-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:04.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:04 vm09 ceph-mon[54524]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:22:04.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:04 vm09 ceph-mon[54524]: from='client.25435 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm05-94855-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm05-94855-12"}]': finished 2026-03-09T20:22:04.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:04 vm09 ceph-mon[54524]: from='client.25189 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm05-94564-10"}]': finished 2026-03-09T20:22:04.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:04 vm09 ceph-mon[54524]: from='client.25459 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm05-94655-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:04.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:04 vm09 ceph-mon[54524]: from='client.25351 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm05-95462-2"}]': finished 2026-03-09T20:22:04.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:04 vm09 ceph-mon[54524]: from='client.25468 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-94559-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:04.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:04 vm09 ceph-mon[54524]: from='client.25183 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm05-94410-10"}]': finished 2026-03-09T20:22:04.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:04 vm09 ceph-mon[54524]: from='client.25202 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94822-6", "mode": "writeback"}]': finished 2026-03-09T20:22:04.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:04 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3781038933' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm05-94410-10"}]: dispatch 2026-03-09T20:22:04.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:04 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1614400225' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm05-94564-10"}]: dispatch 2026-03-09T20:22:04.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:04 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/1767113807' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReadOpvm05-95462-2"}]: dispatch 2026-03-09T20:22:04.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:04 vm09 ceph-mon[54524]: osdmap e74: 8 total, 8 up, 8 in 2026-03-09T20:22:04.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:04 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3093820679' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:04.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:04 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3343522874' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm05-94281-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:04.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:04 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:04.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:04 vm09 ceph-mon[54524]: from='client.25183 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm05-94410-10"}]: dispatch 2026-03-09T20:22:04.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:04 vm09 ceph-mon[54524]: from='client.25189 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm05-94564-10"}]: dispatch 2026-03-09T20:22:04.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:04 vm09 ceph-mon[54524]: from='client.25351 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReadOpvm05-95462-2"}]: dispatch 2026-03-09T20:22:04.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:04 vm09 ceph-mon[54524]: from='client.25477 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:04.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:04 vm09 ceph-mon[54524]: from='client.25483 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm05-94281-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:04.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:04 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3819866823' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm05-95542-2"}]: dispatch 2026-03-09T20:22:04.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:04 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/101270927' entity='client.admin' cmd=[{"prefix":"osd dump"}]: dispatch 2026-03-09T20:22:04.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:04 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/101270927' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-94559-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:04.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:04 vm09 ceph-mon[54524]: from='client.25468 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-94559-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:05.179 INFO:tasks.workunit.client.0.vm05.stdout: pool: Running main() from gmock_main.cc 2026-03-09T20:22:05.179 INFO:tasks.workunit.client.0.vm05.stdout: pool: [==========] Running 6 tests from 1 test suite. 2026-03-09T20:22:05.179 INFO:tasks.workunit.client.0.vm05.stdout: pool: [----------] Global test environment set-up. 2026-03-09T20:22:05.179 INFO:tasks.workunit.client.0.vm05.stdout: pool: [----------] 6 tests from NeoRadosPools 2026-03-09T20:22:05.179 INFO:tasks.workunit.client.0.vm05.stdout: pool: [ RUN ] NeoRadosPools.PoolList 2026-03-09T20:22:05.179 INFO:tasks.workunit.client.0.vm05.stdout: pool: [ OK ] NeoRadosPools.PoolList (1747 ms) 2026-03-09T20:22:05.179 INFO:tasks.workunit.client.0.vm05.stdout: pool: [ RUN ] NeoRadosPools.PoolLookup 2026-03-09T20:22:05.179 INFO:tasks.workunit.client.0.vm05.stdout: pool: [ OK ] NeoRadosPools.PoolLookup (1958 ms) 2026-03-09T20:22:05.179 INFO:tasks.workunit.client.0.vm05.stdout: pool: [ RUN ] NeoRadosPools.PoolLookupOtherInstance 2026-03-09T20:22:05.179 INFO:tasks.workunit.client.0.vm05.stdout: pool: [ OK ] NeoRadosPools.PoolLookupOtherInstance (2906 ms) 2026-03-09T20:22:05.179 INFO:tasks.workunit.client.0.vm05.stdout: pool: [ RUN ] NeoRadosPools.PoolDelete 2026-03-09T20:22:05.180 INFO:tasks.workunit.client.0.vm05.stdout: pool: [ OK ] NeoRadosPools.PoolDelete (3619 ms) 2026-03-09T20:22:05.180 INFO:tasks.workunit.client.0.vm05.stdout: pool: [ RUN ] NeoRadosPools.PoolCreateDelete 2026-03-09T20:22:05.180 INFO:tasks.workunit.client.0.vm05.stdout: pool: [ OK ] NeoRadosPools.PoolCreateDelete (2141 ms) 2026-03-09T20:22:05.180 INFO:tasks.workunit.client.0.vm05.stdout: pool: [ RUN ] NeoRadosPools.PoolCreateWithCrushRule 2026-03-09T20:22:05.180 INFO:tasks.workunit.client.0.vm05.stdout: pool: [ OK ] NeoRadosPools.PoolCreateWithCrushRule (2028 ms) 2026-03-09T20:22:05.180 INFO:tasks.workunit.client.0.vm05.stdout: pool: [----------] 6 tests from NeoRadosPools (14399 ms total) 2026-03-09T20:22:05.180 INFO:tasks.workunit.client.0.vm05.stdout: pool: 2026-03-09T20:22:05.180 INFO:tasks.workunit.client.0.vm05.stdout: pool: [----------] Global test environment tear-down 2026-03-09T20:22:05.180 INFO:tasks.workunit.client.0.vm05.stdout: pool: [==========] 6 tests from 1 test suite ran. (14399 ms total) 2026-03-09T20:22:05.180 INFO:tasks.workunit.client.0.vm05.stdout: pool: [ PASSED ] 6 tests. 
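[editor's note] The "osd tier set-overlay" and "osd tier cache-mode ... writeback" dispatches in the mon entries above set up a writeback cache tier for the watch/notify PP tests. Using the pool names from the log, the CLI equivalent is roughly the following (the "osd tier add" step is not visible in this excerpt but normally precedes set-overlay):

    # attach the cache pool to the base pool, make it the overlay, and switch it to writeback
    ceph osd tier add LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-94822-4 test-rados-api-vm05-94822-6
    ceph osd tier set-overlay LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-94822-4 test-rados-api-vm05-94822-6
    ceph osd tier cache-mode test-rados-api-vm05-94822-6 writeback
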
2026-03-09T20:22:05.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:05 vm09 ceph-mon[54524]: from='client.25465 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm05-94338-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm05-94338-23"}]': finished 2026-03-09T20:22:05.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:05 vm09 ceph-mon[54524]: from='client.25183 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm05-94410-10"}]': finished 2026-03-09T20:22:05.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:05 vm09 ceph-mon[54524]: from='client.25189 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm05-94564-10"}]': finished 2026-03-09T20:22:05.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:05 vm09 ceph-mon[54524]: from='client.25351 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReadOpvm05-95462-2"}]': finished 2026-03-09T20:22:05.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:05 vm09 ceph-mon[54524]: from='client.25477 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:05.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:05 vm09 ceph-mon[54524]: from='client.25483 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm05-94281-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:05.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:05 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3819866823' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm05-95542-2"}]': finished 2026-03-09T20:22:05.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:05 vm09 ceph-mon[54524]: from='client.25468 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-94559-1","app": "app1","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:05.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:05 vm09 ceph-mon[54524]: osdmap e75: 8 total, 8 up, 8 in 2026-03-09T20:22:05.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:05 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/408325252' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:05.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:05 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/101270927' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-94559-1","app": "app2"}]: dispatch 2026-03-09T20:22:05.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:05 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/101270927' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-94559-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:05.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:05 vm09 ceph-mon[54524]: pgmap v59: 728 pgs: 1 active+clean+snaptrim, 308 unknown, 419 active+clean; 144 MiB data, 797 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:22:05.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:05 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3819866823' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm05-95542-2"}]: dispatch 2026-03-09T20:22:05.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:05 vm09 ceph-mon[54524]: from='client.25492 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:05.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:05 vm09 ceph-mon[54524]: from='client.25468 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-94559-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:05.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:05 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:05.523 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:22:05 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:22:05.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:05 vm05 ceph-mon[61345]: from='client.25465 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm05-94338-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm05-94338-23"}]': finished 2026-03-09T20:22:05.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:05 vm05 ceph-mon[61345]: from='client.25183 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm05-94410-10"}]': finished 2026-03-09T20:22:05.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:05 vm05 ceph-mon[61345]: from='client.25189 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm05-94564-10"}]': finished 2026-03-09T20:22:05.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:05 vm05 ceph-mon[61345]: from='client.25351 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReadOpvm05-95462-2"}]': finished 2026-03-09T20:22:05.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:05 vm05 ceph-mon[61345]: from='client.25477 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:05.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:05 vm05 ceph-mon[61345]: from='client.25483 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm05-94281-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:05.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:05 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3819866823' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm05-95542-2"}]': finished 2026-03-09T20:22:05.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:05 vm05 ceph-mon[61345]: from='client.25468 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-94559-1","app": "app1","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:05.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:05 vm05 ceph-mon[61345]: osdmap e75: 8 total, 8 up, 8 in 2026-03-09T20:22:05.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:05 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/408325252' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:05.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:05 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/101270927' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-94559-1","app": "app2"}]: dispatch 2026-03-09T20:22:05.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:05 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/101270927' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-94559-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:05.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:05 vm05 ceph-mon[61345]: pgmap v59: 728 pgs: 1 active+clean+snaptrim, 308 unknown, 419 active+clean; 144 MiB data, 797 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:22:05.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:05 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3819866823' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm05-95542-2"}]: dispatch 2026-03-09T20:22:05.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:05 vm05 ceph-mon[61345]: from='client.25492 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:05.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:05 vm05 ceph-mon[61345]: from='client.25468 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-94559-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:05.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:05 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:05.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:05 vm05 ceph-mon[51870]: from='client.25465 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm05-94338-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm05-94338-23"}]': finished 2026-03-09T20:22:05.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:05 vm05 ceph-mon[51870]: from='client.25183 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm05-94410-10"}]': finished 2026-03-09T20:22:05.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:05 vm05 ceph-mon[51870]: from='client.25189 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm05-94564-10"}]': finished 2026-03-09T20:22:05.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:05 vm05 ceph-mon[51870]: from='client.25351 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReadOpvm05-95462-2"}]': finished 2026-03-09T20:22:05.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:05 vm05 ceph-mon[51870]: from='client.25477 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:05.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:05 vm05 ceph-mon[51870]: from='client.25483 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm05-94281-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:05.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:05 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3819866823' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm05-95542-2"}]': finished 2026-03-09T20:22:05.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:05 vm05 ceph-mon[51870]: from='client.25468 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-94559-1","app": "app1","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:05.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:05 vm05 ceph-mon[51870]: osdmap e75: 8 total, 8 up, 8 in 2026-03-09T20:22:05.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:05 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/408325252' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:05.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:05 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/101270927' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-94559-1","app": "app2"}]: dispatch 2026-03-09T20:22:05.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:05 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/101270927' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-94559-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:05.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:05 vm05 ceph-mon[51870]: pgmap v59: 728 pgs: 1 active+clean+snaptrim, 308 unknown, 419 active+clean; 144 MiB data, 797 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:22:05.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:05 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3819866823' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm05-95542-2"}]: dispatch 2026-03-09T20:22:05.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:05 vm05 ceph-mon[51870]: from='client.25492 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:05.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:05 vm05 ceph-mon[51870]: from='client.25468 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-94559-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:05.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:05 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:06.418 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: Running main() from gmock_main.cc 2026-03-09T20:22:06.418 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [==========] Running 16 tests from 2 test suites. 2026-03-09T20:22:06.418 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [----------] Global test environment set-up. 
2026-03-09T20:22:06.418 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [----------] 2 tests from LibRadosWatchNotifyECPP 2026-03-09T20:22:06.418 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyECPP.WatchNotify 2026-03-09T20:22:06.418 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: notify 2026-03-09T20:22:06.418 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyECPP.WatchNotify (1334 ms) 2026-03-09T20:22:06.418 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyECPP.WatchNotifyTimeout 2026-03-09T20:22:06.418 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyECPP.WatchNotifyTimeout (6 ms) 2026-03-09T20:22:06.418 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [----------] 2 tests from LibRadosWatchNotifyECPP (1340 ms total) 2026-03-09T20:22:06.418 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: 2026-03-09T20:22:06.418 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [----------] 14 tests from LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP 2026-03-09T20:22:06.418 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify/0 2026-03-09T20:22:06.418 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: notify 2026-03-09T20:22:06.418 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify/0 (132 ms) 2026-03-09T20:22:06.418 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify/1 2026-03-09T20:22:06.418 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: notify 2026-03-09T20:22:06.418 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify/1 (3852 ms) 2026-03-09T20:22:06.418 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotifyTimeout/0 2026-03-09T20:22:06.418 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotifyTimeout/0 (9 ms) 2026-03-09T20:22:06.418 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotifyTimeout/1 2026-03-09T20:22:06.418 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotifyTimeout/1 (4 ms) 2026-03-09T20:22:06.418 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2/0 2026-03-09T20:22:06.418 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: handle_notify cookie 94756211583312 notify_id 317827579906 notifier_gid 25202 2026-03-09T20:22:06.418 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2/0 (5 ms) 2026-03-09T20:22:06.418 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2/1 2026-03-09T20:22:06.418 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: handle_notify cookie 94756211583312 notify_id 317827579907 notifier_gid 25202 2026-03-09T20:22:06.418 
INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2/1 (4 ms) 2026-03-09T20:22:06.418 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioWatchNotify2/0 2026-03-09T20:22:06.418 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: handle_notify cookie 94756211583168 notify_id 317827579908 notifier_gid 25202 2026-03-09T20:22:06.418 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioWatchNotify2/0 (7 ms) 2026-03-09T20:22:06.418 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioWatchNotify2/1 2026-03-09T20:22:06.418 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: handle_notify cookie 94756211583168 notify_id 317827579906 notifier_gid 25202 2026-03-09T20:22:06.418 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioWatchNotify2/1 (6 ms) 2026-03-09T20:22:06.418 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioNotify/0 2026-03-09T20:22:06.418 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: handle_notify cookie 94756211583136 notify_id 317827579909 notifier_gid 25202 2026-03-09T20:22:06.418 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioNotify/0 (4 ms) 2026-03-09T20:22:06.418 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioNotify/1 2026-03-09T20:22:06.418 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: handle_notify cookie 94756211587552 notify_id 317827579910 notifier_gid 25202 2026-03-09T20:22:06.418 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioNotify/1 (6 ms) 2026-03-09T20:22:06.418 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2Timeout/0 2026-03-09T20:22:06.418 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: trying... 2026-03-09T20:22:06.418 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: handle_notify cookie 94756211585440 notify_id 317827579907 notifier_gid 25202 2026-03-09T20:22:06.418 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: timed out 2026-03-09T20:22:06.418 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: flushing 2026-03-09T20:22:06.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:06 vm09 ceph-mon[54524]: Health check update: 10 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:22:06.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:06 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3819866823' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm05-95542-2"}]': finished 2026-03-09T20:22:06.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:06 vm09 ceph-mon[54524]: from='client.25492 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:06.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:06 vm09 ceph-mon[54524]: from='client.25468 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-94559-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:06.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:06 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/101270927' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm05-94559-1","app":"dne","key":"key","value":"value"}]: dispatch 2026-03-09T20:22:06.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:06 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/101270927' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm05-94559-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T20:22:06.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:06 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:06.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:06 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2023789741' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm05-95542-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:22:06.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:06 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:06.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:06 vm09 ceph-mon[54524]: osdmap e76: 8 total, 8 up, 8 in 2026-03-09T20:22:06.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:06 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/2519115841' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm05-94855-12"}]: dispatch 2026-03-09T20:22:06.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:06 vm09 ceph-mon[54524]: from='client.25468 ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm05-94559-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T20:22:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:06 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:06 vm09 ceph-mon[54524]: from='client.25528 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm05-95542-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:22:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:06 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:22:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:06 vm09 ceph-mon[54524]: from='client.25435 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm05-94855-12"}]: dispatch 2026-03-09T20:22:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:06 vm09 ceph-mon[54524]: pool 'PoolQuotaPP_vm05-94310-3' is full (reached quota's max_bytes: 4 KiB) 2026-03-09T20:22:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:06 vm09 ceph-mon[54524]: Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T20:22:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:06 vm09 ceph-mon[54524]: from='client.25468 ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm05-94559-1","app":"app1","key":"key1","value":"value1"}]': finished 2026-03-09T20:22:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:06 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:06 vm09 ceph-mon[54524]: from='client.25528 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm05-95542-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:22:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:06 vm09 ceph-mon[54524]: from='client.25435 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm05-94855-12"}]': finished 2026-03-09T20:22:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:06 vm09 ceph-mon[54524]: osdmap e77: 8 total, 8 up, 8 in 2026-03-09T20:22:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:06 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2519115841' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm05-94855-12"}]: dispatch 2026-03-09T20:22:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:06 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/2023789741' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm05-95542-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm05-95542-3"}]: dispatch 2026-03-09T20:22:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:06 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/4061642698' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:06 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/886020232' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm05-94758-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:06 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/101270927' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm05-94559-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T20:22:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:06 vm09 ceph-mon[54524]: from='client.25435 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm05-94855-12"}]: dispatch 2026-03-09T20:22:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:06 vm09 ceph-mon[54524]: from='client.25528 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm05-95542-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm05-95542-3"}]: dispatch 2026-03-09T20:22:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:06 vm09 ceph-mon[54524]: from='client.25531 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:06 vm09 ceph-mon[54524]: from='client.25537 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm05-94758-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:06 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1185082632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm05-94281-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:06 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/749287343' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:22:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:06 vm09 ceph-mon[54524]: from='client.25468 ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm05-94559-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T20:22:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:06 vm09 ceph-mon[54524]: from='client.25465 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:22:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:06 vm09 ceph-mon[54524]: from='client.25292 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm05-94281-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:06.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[61345]: Health check update: 10 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:22:06.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3819866823' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm05-95542-2"}]': finished 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[61345]: from='client.25492 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[61345]: from='client.25468 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-94559-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/101270927' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm05-94559-1","app":"dne","key":"key","value":"value"}]: dispatch 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/101270927' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm05-94559-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2023789741' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm05-95542-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[61345]: osdmap e76: 8 total, 8 up, 8 in 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2519115841' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm05-94855-12"}]: dispatch 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[61345]: from='client.25468 ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm05-94559-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[61345]: from='client.25528 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm05-95542-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[61345]: from='client.25435 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm05-94855-12"}]: dispatch 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[61345]: pool 'PoolQuotaPP_vm05-94310-3' is full (reached quota's max_bytes: 4 KiB) 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[61345]: Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[61345]: from='client.25468 ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm05-94559-1","app":"app1","key":"key1","value":"value1"}]': finished 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[61345]: from='client.25528 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm05-95542-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[61345]: from='client.25435 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm05-94855-12"}]': finished 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[61345]: osdmap e77: 8 total, 8 up, 8 in 2026-03-09T20:22:06.660 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2519115841' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm05-94855-12"}]: dispatch 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2023789741' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm05-95542-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm05-95542-3"}]: dispatch 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/4061642698' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/886020232' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm05-94758-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/101270927' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm05-94559-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[61345]: from='client.25435 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm05-94855-12"}]: dispatch 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[61345]: from='client.25528 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm05-95542-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm05-95542-3"}]: dispatch 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[61345]: from='client.25531 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[61345]: from='client.25537 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm05-94758-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1185082632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm05-94281-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/749287343' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[61345]: from='client.25468 ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm05-94559-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[61345]: from='client.25465 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[61345]: from='client.25292 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm05-94281-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[51870]: Health check update: 10 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3819866823' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm05-95542-2"}]': finished 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[51870]: from='client.25492 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[51870]: from='client.25468 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-94559-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/101270927' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm05-94559-1","app":"dne","key":"key","value":"value"}]: dispatch 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/101270927' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm05-94559-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2023789741' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm05-95542-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[51870]: osdmap e76: 8 total, 8 up, 8 in 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2519115841' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm05-94855-12"}]: dispatch 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[51870]: from='client.25468 ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm05-94559-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[51870]: from='client.25528 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm05-95542-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[51870]: from='client.25435 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm05-94855-12"}]: dispatch 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[51870]: pool 'PoolQuotaPP_vm05-94310-3' is full (reached quota's max_bytes: 4 KiB) 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[51870]: Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[51870]: from='client.25468 ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm05-94559-1","app":"app1","key":"key1","value":"value1"}]': finished 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[51870]: from='client.25528 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm05-95542-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:22:06.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[51870]: from='client.25435 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm05-94855-12"}]': finished 2026-03-09T20:22:06.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[51870]: osdmap e77: 8 total, 8 up, 8 in 2026-03-09T20:22:06.661 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2519115841' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm05-94855-12"}]: dispatch 2026-03-09T20:22:06.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2023789741' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm05-95542-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm05-95542-3"}]: dispatch 2026-03-09T20:22:06.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/4061642698' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:06.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/886020232' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm05-94758-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:06.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/101270927' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm05-94559-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T20:22:06.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[51870]: from='client.25435 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm05-94855-12"}]: dispatch 2026-03-09T20:22:06.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[51870]: from='client.25528 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm05-95542-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm05-95542-3"}]: dispatch 2026-03-09T20:22:06.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[51870]: from='client.25531 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:06.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[51870]: from='client.25537 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm05-94758-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:06.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1185082632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm05-94281-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:06.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/749287343' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:22:06.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[51870]: from='client.25468 ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm05-94559-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T20:22:06.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[51870]: from='client.25465 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:22:06.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:06 vm05 ceph-mon[51870]: from='client.25292 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm05-94281-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:07.162 INFO:tasks.workunit.client.0.vm05.stdout: api_watc api_watch_notify: Running main() from gmock_main.cc 2026-03-09T20:22:07.162 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [==========] Running 11 tests from 2 test suites. 2026-03-09T20:22:07.162 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [----------] Global test environment set-up. 2026-03-09T20:22:07.162 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [----------] 10 tests from LibRadosWatchNotify 2026-03-09T20:22:07.162 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.WatchNotify 2026-03-09T20:22:07.162 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: watch_notify_test_cb 2026-03-09T20:22:07.162 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.WatchNotify (659 ms) 2026-03-09T20:22:07.162 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.Watch2Delete 2026-03-09T20:22:07.162 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: watch_notify2_test_errcb cookie 94359175940624 err -107 2026-03-09T20:22:07.162 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: waiting up to 300 for disconnect notification ... 2026-03-09T20:22:07.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.Watch2Delete (43 ms) 2026-03-09T20:22:07.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.AioWatchDelete 2026-03-09T20:22:07.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: watch_notify2_test_errcb cookie 94359175946864 err -107 2026-03-09T20:22:07.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: waiting up to 300 for disconnect notification ... 
2026-03-09T20:22:07.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.AioWatchDelete (24 ms) 2026-03-09T20:22:07.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.WatchNotify2 2026-03-09T20:22:07.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: watch_notify2_test_cb from 24781 notify_id 270582939648 cookie 94359175966032 2026-03-09T20:22:07.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.WatchNotify2 (15 ms) 2026-03-09T20:22:07.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.AioWatchNotify2 2026-03-09T20:22:07.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: watch_notify2_test_cb from 24781 notify_id 270582939648 cookie 94359175969088 2026-03-09T20:22:07.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.AioWatchNotify2 (12 ms) 2026-03-09T20:22:07.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.AioNotify 2026-03-09T20:22:07.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: watch_notify2_test_cb from 24781 notify_id 270582939648 cookie 94359175972688 2026-03-09T20:22:07.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.AioNotify (17 ms) 2026-03-09T20:22:07.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.WatchNotify2Multi 2026-03-09T20:22:07.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: watch_notify2_test_cb from 24781 notify_id 270582939648 cookie 94359175990752 2026-03-09T20:22:07.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: watch_notify2_test_cb from 24781 notify_id 270582939648 cookie 94359175993008 2026-03-09T20:22:07.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.WatchNotify2Multi (14 ms) 2026-03-09T20:22:07.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.WatchNotify2Timeout 2026-03-09T20:22:07.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: watch_notify2_test_cb from 24781 notify_id 270582939649 cookie 94359175990752 2026-03-09T20:22:07.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: watch_notify2_test_cb from 24781 notify_id 274877906946 cookie 94359175990752 2026-03-09T20:22:07.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.WatchNotify2Timeout (3009 ms) 2026-03-09T20:22:07.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.Watch3Timeout 2026-03-09T20:22:07.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: waiting up to 1024 for osd to time us out ... 
2026-03-09T20:22:07.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: watch_notify2_test_errcb cookie 94359175990752 err -107 2026-03-09T20:22:07.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: watch_notify2_test_cb from 24781 notify_id 300647710722 cookie 94359175990752 2026-03-09T20:22:07.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.Watch3Timeout (5012 ms) 2026-03-09T20:22:07.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.AioWatchDelete2 2026-03-09T20:22:07.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: watch_notify2_test_errcb cookie 94359175990752 err -107 2026-03-09T20:22:07.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: waiting up to 30 for disconnect notification ... 2026-03-09T20:22:07.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.AioWatchDelete2 (5 ms) 2026-03-09T20:22:07.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [----------] 10 tests from LibRadosWatchNotify (8810 ms total) 2026-03-09T20:22:07.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: 2026-03-09T20:22:07.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [----------] 1 test from LibRadosWatchNotifyEC 2026-03-09T20:22:07.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotifyEC.WatchNotify 2026-03-09T20:22:07.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: watch_notify_test_cb 2026-03-09T20:22:07.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [ OK ] LibRadosWatchNotifyEC.WatchNotify (1164 ms) 2026-03-09T20:22:07.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [----------] 1 test from LibRadosWatchNotifyEC (1164 ms total) 2026-03-09T20:22:07.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: 2026-03-09T20:22:07.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [----------] Global test environment tear-down 2026-03-09T20:22:07.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [==========] 11 tests from 2 test suites ran. (17198 ms total) 2026-03-09T20:22:07.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [ PASSED ] 11 tests. 2026-03-09T20:22:07.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:07 vm09 ceph-mon[54524]: pgmap v62: 648 pgs: 48 creating+peering, 144 unknown, 456 active+clean; 144 MiB data, 837 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 7.5 KiB/s wr, 5 op/s 2026-03-09T20:22:07.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:07 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:07.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:07 vm09 ceph-mon[54524]: from='client.25435 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm05-94855-12"}]': finished 2026-03-09T20:22:07.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:07 vm05 ceph-mon[61345]: pgmap v62: 648 pgs: 48 creating+peering, 144 unknown, 456 active+clean; 144 MiB data, 837 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 7.5 KiB/s wr, 5 op/s 2026-03-09T20:22:07.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:07 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:07.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:07 vm05 ceph-mon[61345]: from='client.25435 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm05-94855-12"}]': finished 2026-03-09T20:22:07.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:07 vm05 ceph-mon[51870]: pgmap v62: 648 pgs: 48 creating+peering, 144 unknown, 456 active+clean; 144 MiB data, 837 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 7.5 KiB/s wr, 5 op/s 2026-03-09T20:22:07.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:07 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:07.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:07 vm05 ceph-mon[51870]: from='client.25435 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm05-94855-12"}]': finished 2026-03-09T20:22:08.187 INFO:tasks.workunit.client.0.vm05.stdout: ] LibRadosIoECPP.RoundTripPP2 2026-03-09T20:22:08.187 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoECPP.RoundTripPP2 (5 ms) 2026-03-09T20:22:08.187 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.OverlappingWriteRoundTripPP 2026-03-09T20:22:08.187 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoECPP.OverlappingWriteRoundTripPP (5 ms) 2026-03-09T20:22:08.187 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.WriteFullRoundTripPP 2026-03-09T20:22:08.187 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoECPP.WriteFullRoundTripPP (4 ms) 2026-03-09T20:22:08.187 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.WriteFullRoundTripPP2 2026-03-09T20:22:08.187 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoECPP.WriteFullRoundTripPP2 (3 ms) 2026-03-09T20:22:08.187 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.AppendRoundTripPP 2026-03-09T20:22:08.187 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoECPP.AppendRoundTripPP (7 ms) 2026-03-09T20:22:08.187 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.TruncTestPP 2026-03-09T20:22:08.187 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoECPP.TruncTestPP (5 ms) 2026-03-09T20:22:08.187 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.RemoveTestPP 2026-03-09T20:22:08.187 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoECPP.RemoveTestPP (3 ms) 2026-03-09T20:22:08.187 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.XattrsRoundTripPP 2026-03-09T20:22:08.187 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoECPP.XattrsRoundTripPP (4 ms) 2026-03-09T20:22:08.187 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.RmXattrPP 2026-03-09T20:22:08.187 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoECPP.RmXattrPP (12 ms) 2026-03-09T20:22:08.187 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.CrcZeroWrite 2026-03-09T20:22:08.187 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoECPP.CrcZeroWrite (6412 ms) 2026-03-09T20:22:08.187 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] 
LibRadosIoECPP.XattrListPP 2026-03-09T20:22:08.187 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoECPP.XattrListPP (1194 ms) 2026-03-09T20:22:08.187 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.CmpExtPP 2026-03-09T20:22:08.187 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoECPP.CmpExtPP (5 ms) 2026-03-09T20:22:08.187 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.CmpExtDNEPP 2026-03-09T20:22:08.187 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoECPP.CmpExtDNEPP (3 ms) 2026-03-09T20:22:08.187 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.CmpExtMismatchPP 2026-03-09T20:22:08.187 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoECPP.CmpExtMismatchPP (10 ms) 2026-03-09T20:22:08.187 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [----------] 18 tests from LibRadosIoECPP (9882 ms total) 2026-03-09T20:22:08.187 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: 2026-03-09T20:22:08.187 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [----------] Global test environment tear-down 2026-03-09T20:22:08.187 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [==========] 39 tests from 2 test suites ran. (18533 ms total) 2026-03-09T20:22:08.187 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ PASSED ] 39 tests. 2026-03-09T20:22:08.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:08 vm09 ceph-mon[54524]: from='client.25531 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:08.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:08 vm09 ceph-mon[54524]: from='client.25537 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm05-94758-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:08.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:08 vm09 ceph-mon[54524]: from='client.25468 ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm05-94559-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-09T20:22:08.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:08 vm09 ceph-mon[54524]: from='client.25465 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-94338-23"}]': finished 2026-03-09T20:22:08.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:08 vm09 ceph-mon[54524]: from='client.25292 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm05-94281-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:08.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:08 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/749287343' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:22:08.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:08 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/101270927' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm05-94559-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T20:22:08.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:08 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:08.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:08 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2071247484' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:08.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:08 vm09 ceph-mon[54524]: osdmap e78: 8 total, 8 up, 8 in 2026-03-09T20:22:08.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:08 vm09 ceph-mon[54524]: from='client.25465 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:22:08.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:08 vm09 ceph-mon[54524]: from='client.25468 ' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm05-94559-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T20:22:08.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:08 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:08.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:08 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:08.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:08 vm09 ceph-mon[54524]: from='client.25558 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:08.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:08 vm09 ceph-mon[54524]: from='client.25528 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsManyvm05-95542-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm05-95542-3"}]': finished 2026-03-09T20:22:08.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:08 vm09 ceph-mon[54524]: from='client.25465 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-94338-23"}]': finished 2026-03-09T20:22:08.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:08 vm09 ceph-mon[54524]: from='client.25468 ' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm05-94559-1","app":"app1","key":"key1"}]': finished 2026-03-09T20:22:08.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:08 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:08.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:08 vm09 ceph-mon[54524]: from='client.25558 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:08.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:08 vm09 ceph-mon[54524]: osdmap e79: 8 total, 8 up, 8 in 2026-03-09T20:22:08.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:08 vm09 
ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:08.561 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:08 vm05 ceph-mon[51870]: from='client.25531 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:08.561 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:08 vm05 ceph-mon[51870]: from='client.25537 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm05-94758-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:08.561 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:08 vm05 ceph-mon[51870]: from='client.25468 ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm05-94559-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-09T20:22:08.561 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:08 vm05 ceph-mon[51870]: from='client.25465 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-94338-23"}]': finished 2026-03-09T20:22:08.561 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:08 vm05 ceph-mon[51870]: from='client.25292 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm05-94281-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:08.561 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:08 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/749287343' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:22:08.561 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:08 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/101270927' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm05-94559-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T20:22:08.561 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:08 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:08.561 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:08 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/2071247484' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:08.561 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:08 vm05 ceph-mon[51870]: osdmap e78: 8 total, 8 up, 8 in 2026-03-09T20:22:08.561 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:08 vm05 ceph-mon[51870]: from='client.25465 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:22:08.561 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:08 vm05 ceph-mon[51870]: from='client.25468 ' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm05-94559-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T20:22:08.561 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:08 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:08.561 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:08 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:08.561 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:08 vm05 ceph-mon[51870]: from='client.25558 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:08.561 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:08 vm05 ceph-mon[51870]: from='client.25528 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsManyvm05-95542-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm05-95542-3"}]': finished 2026-03-09T20:22:08.561 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:08 vm05 ceph-mon[51870]: from='client.25465 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-94338-23"}]': finished 2026-03-09T20:22:08.561 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:08 vm05 ceph-mon[51870]: from='client.25468 ' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm05-94559-1","app":"app1","key":"key1"}]': finished 2026-03-09T20:22:08.561 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:08 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:08.561 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:08 vm05 ceph-mon[51870]: from='client.25558 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:08.561 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:08 vm05 ceph-mon[51870]: osdmap e79: 8 total, 8 up, 8 in 2026-03-09T20:22:08.561 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:08 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:08.562 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:08 vm05 ceph-mon[61345]: from='client.25531 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:08.562 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:08 vm05 ceph-mon[61345]: from='client.25537 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm05-94758-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:08.562 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:08 vm05 ceph-mon[61345]: from='client.25468 ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm05-94559-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-09T20:22:08.562 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:08 vm05 ceph-mon[61345]: from='client.25465 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-94338-23"}]': finished 2026-03-09T20:22:08.562 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:08 vm05 ceph-mon[61345]: from='client.25292 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm05-94281-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:08.562 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:08 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/749287343' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:22:08.562 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:08 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/101270927' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm05-94559-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T20:22:08.562 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:08 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:08.562 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:08 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/2071247484' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:08.562 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:08 vm05 ceph-mon[61345]: osdmap e78: 8 total, 8 up, 8 in 2026-03-09T20:22:08.562 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:08 vm05 ceph-mon[61345]: from='client.25465 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-94338-23"}]: dispatch 2026-03-09T20:22:08.562 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:08 vm05 ceph-mon[61345]: from='client.25468 ' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm05-94559-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T20:22:08.562 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:08 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:08.562 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:08 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:08.562 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:08 vm05 ceph-mon[61345]: from='client.25558 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:08.562 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:08 vm05 ceph-mon[61345]: from='client.25528 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsManyvm05-95542-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm05-95542-3"}]': finished 2026-03-09T20:22:08.562 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:08 vm05 ceph-mon[61345]: from='client.25465 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-94338-23"}]': finished 2026-03-09T20:22:08.562 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:08 vm05 ceph-mon[61345]: from='client.25468 ' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm05-94559-1","app":"app1","key":"key1"}]': finished 2026-03-09T20:22:08.562 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:08 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:08.562 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:08 vm05 ceph-mon[61345]: from='client.25558 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:08.562 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:08 vm05 ceph-mon[61345]: osdmap e79: 8 total, 8 up, 8 in 2026-03-09T20:22:08.562 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:08 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:08.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:22:08 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:22:08] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:22:09.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:09 vm05 ceph-mon[51870]: pgmap v65: 720 pgs: 25 creating+peering, 239 unknown, 456 active+clean; 144 MiB data, 837 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 7.7 KiB/s wr, 7 op/s 2026-03-09T20:22:09.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:09 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-7", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:22:09.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:09 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-7", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:22:09.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:09 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:22:09.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:09 vm05 ceph-mon[61345]: pgmap v65: 720 pgs: 25 creating+peering, 239 unknown, 456 active+clean; 144 MiB data, 837 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 7.7 KiB/s wr, 7 op/s 2026-03-09T20:22:09.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:09 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-7", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:22:09.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:09 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-7", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:22:09.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:09 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:22:09.938 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:09 vm09 ceph-mon[54524]: pgmap v65: 720 pgs: 25 creating+peering, 239 unknown, 456 active+clean; 144 MiB data, 837 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 7.7 KiB/s wr, 7 op/s 2026-03-09T20:22:09.938 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:09 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-7", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:22:09.938 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:09 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-7", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:22:09.938 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:09 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:22:10.862 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:10 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-7", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:22:10.862 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:10 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:10.862 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:10 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-7"}]: dispatch 2026-03-09T20:22:10.862 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:10 vm09 ceph-mon[54524]: osdmap e80: 8 total, 8 up, 8 in 2026-03-09T20:22:10.862 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:10 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/4232111169' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm05-94281-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:10.862 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:10 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-7"}]: dispatch 2026-03-09T20:22:10.862 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:10 vm09 ceph-mon[54524]: from='client.25606 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm05-94281-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:10.862 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:10 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-7"}]': finished 2026-03-09T20:22:10.862 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:10 vm09 ceph-mon[54524]: from='client.25606 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm05-94281-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:10.862 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:10 vm09 ceph-mon[54524]: osdmap e81: 8 total, 8 up, 8 in 2026-03-09T20:22:10.862 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:10 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3075515612' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:10.862 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:10 vm09 ceph-mon[54524]: from='client.26467 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:10.862 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:10 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:10.862 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:10 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:22:10.862 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:10 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:22:10.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:10 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-7", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:22:10.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:10 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:10.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:10 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-7"}]: dispatch 2026-03-09T20:22:10.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:10 vm05 ceph-mon[51870]: osdmap e80: 8 total, 8 up, 8 in 2026-03-09T20:22:10.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:10 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/4232111169' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm05-94281-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:10.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:10 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-7"}]: dispatch 2026-03-09T20:22:10.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:10 vm05 ceph-mon[51870]: from='client.25606 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm05-94281-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:10.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:10 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-7"}]': finished 2026-03-09T20:22:10.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:10 vm05 ceph-mon[51870]: from='client.25606 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm05-94281-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:10.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:10 vm05 ceph-mon[51870]: osdmap e81: 8 total, 8 up, 8 in 2026-03-09T20:22:10.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:10 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3075515612' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:10.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:10 vm05 ceph-mon[51870]: from='client.26467 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:10.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:10 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:10.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:10 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:22:10.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:10 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:22:10.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:10 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-7", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:22:10.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:10 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:10.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:10 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-7"}]: dispatch 2026-03-09T20:22:10.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:10 vm05 ceph-mon[61345]: osdmap e80: 8 total, 8 up, 8 in 2026-03-09T20:22:10.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:10 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/4232111169' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm05-94281-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:10.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:10 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-7"}]: dispatch 2026-03-09T20:22:10.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:10 vm05 ceph-mon[61345]: from='client.25606 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm05-94281-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:10.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:10 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-7"}]': finished 2026-03-09T20:22:10.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:10 vm05 ceph-mon[61345]: from='client.25606 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm05-94281-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:10.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:10 vm05 ceph-mon[61345]: osdmap e81: 8 total, 8 up, 8 in 2026-03-09T20:22:10.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:10 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3075515612' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:10.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:10 vm05 ceph-mon[61345]: from='client.26467 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:10.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:10 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:10.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:10 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:22:10.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:10 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:22:11.912 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:11 vm05 ceph-mon[51870]: pgmap v67: 688 pgs: 25 creating+peering, 207 unknown, 456 active+clean; 144 MiB data, 837 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:22:11.912 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:11 vm05 ceph-mon[51870]: Health check update: 8 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:22:11.912 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:11 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:11.912 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:11 vm05 ceph-mon[51870]: from='client.26467 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:11.912 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:11 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:22:11.912 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:11 vm05 ceph-mon[51870]: osdmap e82: 8 total, 8 up, 8 in 2026-03-09T20:22:11.912 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:11 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-7"}]: dispatch 2026-03-09T20:22:11.912 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:11 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-7"}]: dispatch 2026-03-09T20:22:11.912 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:11 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1464582487' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:11.912 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:11 vm05 ceph-mon[51870]: from='client.26576 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:11.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:11 vm05 ceph-mon[61345]: pgmap v67: 688 pgs: 25 creating+peering, 207 unknown, 456 active+clean; 144 MiB data, 837 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:22:11.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:11 vm05 ceph-mon[61345]: Health check update: 8 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:22:11.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:11 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:11.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:11 vm05 ceph-mon[61345]: from='client.26467 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:11.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:11 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:22:11.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:11 vm05 ceph-mon[61345]: osdmap e82: 8 total, 8 up, 8 in 2026-03-09T20:22:11.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:11 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-7"}]: dispatch 2026-03-09T20:22:11.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:11 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-7"}]: dispatch 2026-03-09T20:22:11.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:11 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1464582487' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:11.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:11 vm05 ceph-mon[61345]: from='client.26576 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:12.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:11 vm09 ceph-mon[54524]: pgmap v67: 688 pgs: 25 creating+peering, 207 unknown, 456 active+clean; 144 MiB data, 837 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:22:12.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:11 vm09 ceph-mon[54524]: Health check update: 8 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:22:12.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:11 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:12.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:11 vm09 ceph-mon[54524]: from='client.26467 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-95156-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:12.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:11 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:22:12.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:11 vm09 ceph-mon[54524]: osdmap e82: 8 total, 8 up, 8 in 2026-03-09T20:22:12.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:11 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-7"}]: dispatch 2026-03-09T20:22:12.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:11 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-7"}]: dispatch 2026-03-09T20:22:12.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:11 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1464582487' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:12.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:11 vm09 ceph-mon[54524]: from='client.26576 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:13.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:12 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:22:13.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:12 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:22:13.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:12 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:13.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:12 vm09 ceph-mon[54524]: Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T20:22:13.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:12 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-7"}]': finished 2026-03-09T20:22:13.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:12 vm09 ceph-mon[54524]: from='client.26576 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:13.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:12 vm09 ceph-mon[54524]: osdmap e83: 8 total, 8 up, 8 in 2026-03-09T20:22:13.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:12 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2914963611' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm05-94281-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:13.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:12 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:22:13.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:12 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:22:13.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:12 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:13.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:12 vm05 ceph-mon[51870]: Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T20:22:13.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:12 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-7"}]': finished 2026-03-09T20:22:13.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:12 vm05 ceph-mon[51870]: from='client.26576 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:13.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:12 vm05 ceph-mon[51870]: osdmap e83: 8 total, 8 up, 8 in 2026-03-09T20:22:13.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:12 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2914963611' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm05-94281-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:13.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:12 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:22:13.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:12 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:22:13.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:12 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:13.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:12 vm05 ceph-mon[61345]: Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T20:22:13.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:12 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-7"}]': finished 2026-03-09T20:22:13.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:12 vm05 ceph-mon[61345]: from='client.26576 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:13.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:12 vm05 ceph-mon[61345]: osdmap e83: 8 total, 8 up, 8 in 2026-03-09T20:22:13.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:12 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2914963611' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm05-94281-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:14.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:13 vm09 ceph-mon[54524]: pgmap v70: 752 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 192 unknown, 549 active+clean; 144 MiB data, 856 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 3.2 KiB/s wr, 5 op/s 2026-03-09T20:22:14.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:13 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:14.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:13 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/2914963611' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTest_vm05-94281-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:14.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:13 vm09 ceph-mon[54524]: osdmap e84: 8 total, 8 up, 8 in 2026-03-09T20:22:14.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:13 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2023789741' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm05-95542-3"}]: dispatch 2026-03-09T20:22:14.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:13 vm09 ceph-mon[54524]: from='client.25528 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm05-95542-3"}]: dispatch 2026-03-09T20:22:14.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:13 vm05 ceph-mon[51870]: pgmap v70: 752 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 192 unknown, 549 active+clean; 144 MiB data, 856 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 3.2 KiB/s wr, 5 op/s 2026-03-09T20:22:14.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:13 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:14.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:13 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2914963611' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTest_vm05-94281-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:14.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:13 vm05 ceph-mon[51870]: osdmap e84: 8 total, 8 up, 8 in 2026-03-09T20:22:14.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:13 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2023789741' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm05-95542-3"}]: dispatch 2026-03-09T20:22:14.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:13 vm05 ceph-mon[51870]: from='client.25528 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm05-95542-3"}]: dispatch 2026-03-09T20:22:14.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:13 vm05 ceph-mon[61345]: pgmap v70: 752 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 192 unknown, 549 active+clean; 144 MiB data, 856 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 3.2 KiB/s wr, 5 op/s 2026-03-09T20:22:14.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:13 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:14.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:13 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2914963611' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTest_vm05-94281-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:14.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:13 vm05 ceph-mon[61345]: osdmap e84: 8 total, 8 up, 8 in 2026-03-09T20:22:14.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:13 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/2023789741' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm05-95542-3"}]: dispatch 2026-03-09T20:22:14.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:13 vm05 ceph-mon[61345]: from='client.25528 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm05-95542-3"}]: dispatch 2026-03-09T20:22:15.456 INFO:tasks.workunit.client.0.vm05.stdout: ec_list: Running main() from gmock_main.cc 2026-03-09T20:22:15.456 INFO:tasks.workunit.client.0.vm05.stdout: ec_list: [==========] Running 3 tests from 1 test suite. 2026-03-09T20:22:15.456 INFO:tasks.workunit.client.0.vm05.stdout: ec_list: [----------] Global test environment set-up. 2026-03-09T20:22:15.456 INFO:tasks.workunit.client.0.vm05.stdout: ec_list: [----------] 3 tests from NeoradosECList 2026-03-09T20:22:15.456 INFO:tasks.workunit.client.0.vm05.stdout: ec_list: [ RUN ] NeoradosECList.ListObjects 2026-03-09T20:22:15.456 INFO:tasks.workunit.client.0.vm05.stdout: ec_list: [ OK ] NeoradosECList.ListObjects (7703 ms) 2026-03-09T20:22:15.456 INFO:tasks.workunit.client.0.vm05.stdout: ec_list: [ RUN ] NeoradosECList.ListObjectsNS 2026-03-09T20:22:15.456 INFO:tasks.workunit.client.0.vm05.stdout: ec_list: [ OK ] NeoradosECList.ListObjectsNS (6804 ms) 2026-03-09T20:22:15.456 INFO:tasks.workunit.client.0.vm05.stdout: ec_list: [ RUN ] NeoradosECList.ListObjectsMany 2026-03-09T20:22:15.456 INFO:tasks.workunit.client.0.vm05.stdout: ec_list: [ OK ] NeoradosECList.ListObjectsMany (10292 ms) 2026-03-09T20:22:15.456 INFO:tasks.workunit.client.0.vm05.stdout: ec_list: [----------] 3 tests from NeoradosECList (24799 ms total) 2026-03-09T20:22:15.456 INFO:tasks.workunit.client.0.vm05.stdout: ec_list: 2026-03-09T20:22:15.456 INFO:tasks.workunit.client.0.vm05.stdout: ec_list: [----------] Global test environment tear-down 2026-03-09T20:22:15.456 INFO:tasks.workunit.client.0.vm05.stdout: ec_list: [==========] 3 tests from 1 test suite ran. (24799 ms total) 2026-03-09T20:22:15.456 INFO:tasks.workunit.client.0.vm05.stdout: ec_list: [ PASSED ] 3 tests. 2026-03-09T20:22:15.475 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: Running main() from gmock_main.cc 2026-03-09T20:22:15.475 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [==========] Running 8 tests from 2 test suites. 2026-03-09T20:22:15.475 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [----------] Global test environment set-up. 
2026-03-09T20:22:15.475 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [----------] 1 test from LibradosCWriteOps 2026-03-09T20:22:15.476 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [ RUN ] LibradosCWriteOps.NewDelete 2026-03-09T20:22:15.476 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [ OK ] LibradosCWriteOps.NewDelete (0 ms) 2026-03-09T20:22:15.476 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [----------] 1 test from LibradosCWriteOps (0 ms total) 2026-03-09T20:22:15.476 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: 2026-03-09T20:22:15.476 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [----------] 7 tests from LibRadosCWriteOps 2026-03-09T20:22:15.476 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [ RUN ] LibRadosCWriteOps.assertExists 2026-03-09T20:22:15.476 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [ OK ] LibRadosCWriteOps.assertExists (3312 ms) 2026-03-09T20:22:15.476 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [ RUN ] LibRadosCWriteOps.WriteOpAssertVersion 2026-03-09T20:22:15.476 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [ OK ] LibRadosCWriteOps.WriteOpAssertVersion (3891 ms) 2026-03-09T20:22:15.476 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [ RUN ] LibRadosCWriteOps.Xattrs 2026-03-09T20:22:15.476 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [ OK ] LibRadosCWriteOps.Xattrs (3024 ms) 2026-03-09T20:22:15.476 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [ RUN ] LibRadosCWriteOps.Write 2026-03-09T20:22:15.476 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [ OK ] LibRadosCWriteOps.Write (2732 ms) 2026-03-09T20:22:15.476 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [ RUN ] LibRadosCWriteOps.Exec 2026-03-09T20:22:15.476 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [ OK ] LibRadosCWriteOps.Exec (2858 ms) 2026-03-09T20:22:15.476 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [ RUN ] LibRadosCWriteOps.WriteSame 2026-03-09T20:22:15.476 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [ OK ] LibRadosCWriteOps.WriteSame (3486 ms) 2026-03-09T20:22:15.476 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [ RUN ] LibRadosCWriteOps.CmpExt 2026-03-09T20:22:15.476 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [ OK ] LibRadosCWriteOps.CmpExt (5957 ms) 2026-03-09T20:22:15.476 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [----------] 7 tests from LibRadosCWriteOps (25260 ms total) 2026-03-09T20:22:15.476 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: 2026-03-09T20:22:15.476 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [----------] Global test environment tear-down 2026-03-09T20:22:15.476 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [==========] 8 tests from 2 test suites ran. (25260 ms total) 2026-03-09T20:22:15.476 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [ PASSED ] 8 tests. 
2026-03-09T20:22:15.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:15 vm09 ceph-mon[54524]: pgmap v73: 648 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 192 unknown, 445 active+clean; 144 MiB data, 856 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T20:22:15.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:15 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:22:15.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:15 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:22:15.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:15 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:15.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:15 vm09 ceph-mon[54524]: from='client.25528 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm05-95542-3"}]': finished 2026-03-09T20:22:15.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:15 vm09 ceph-mon[54524]: osdmap e85: 8 total, 8 up, 8 in 2026-03-09T20:22:15.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:15 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2023789741' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm05-95542-3"}]: dispatch 2026-03-09T20:22:15.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:15 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:15.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:15 vm09 ceph-mon[54524]: from='client.25528 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm05-95542-3"}]: dispatch 2026-03-09T20:22:15.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:15 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:15.522 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:22:15 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:22:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:15 vm05 ceph-mon[51870]: pgmap v73: 648 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 192 unknown, 445 active+clean; 144 MiB data, 856 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T20:22:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:15 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:22:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:15 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:22:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:15 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:15 vm05 ceph-mon[51870]: from='client.25528 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm05-95542-3"}]': finished 2026-03-09T20:22:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:15 vm05 ceph-mon[51870]: osdmap e85: 8 total, 8 up, 8 in 2026-03-09T20:22:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:15 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2023789741' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm05-95542-3"}]: dispatch 2026-03-09T20:22:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:15 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:15 vm05 ceph-mon[51870]: from='client.25528 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm05-95542-3"}]: dispatch 2026-03-09T20:22:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:15 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:15.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:15 vm05 ceph-mon[61345]: pgmap v73: 648 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 192 unknown, 445 active+clean; 144 MiB data, 856 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T20:22:15.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:15 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:22:15.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:15 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:22:15.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:15 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:15.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:15 vm05 ceph-mon[61345]: from='client.25528 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm05-95542-3"}]': finished 2026-03-09T20:22:15.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:15 vm05 ceph-mon[61345]: osdmap e85: 8 total, 8 up, 8 in 2026-03-09T20:22:15.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:15 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2023789741' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm05-95542-3"}]: dispatch 2026-03-09T20:22:15.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:15 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:15.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:15 vm05 ceph-mon[61345]: from='client.25528 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm05-95542-3"}]: dispatch 2026-03-09T20:22:15.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:15 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:16.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:16 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:22:16.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:16 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:16.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:16 vm05 ceph-mon[51870]: from='client.25528 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm05-95542-3"}]': finished 2026-03-09T20:22:16.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:16 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:16.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:16 vm05 ceph-mon[51870]: osdmap e86: 8 total, 8 up, 8 in 2026-03-09T20:22:16.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:16 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/4196973593' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm05-94310-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:16.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:16 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/68721679' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm05-94281-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:16.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:16 vm05 ceph-mon[51870]: from='client.28103 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm05-94310-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:16.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:16 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-9", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:22:16.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:16 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-9", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:22:16.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:16 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3348390106' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-94822-4"}]: dispatch 2026-03-09T20:22:16.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:16 vm05 ceph-mon[51870]: from='client.25202 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-94822-4"}]: dispatch 2026-03-09T20:22:16.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:16 vm05 ceph-mon[51870]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:22:16.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:16 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:22:16.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:16 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:16.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:16 vm05 ceph-mon[61345]: from='client.25528 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm05-95542-3"}]': finished 2026-03-09T20:22:16.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:16 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:16.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:16 vm05 ceph-mon[61345]: osdmap e86: 8 total, 8 up, 8 in 2026-03-09T20:22:16.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:16 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/4196973593' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm05-94310-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:16.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:16 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/68721679' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm05-94281-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:16.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:16 vm05 ceph-mon[61345]: from='client.28103 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm05-94310-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:16.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:16 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-9", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:22:16.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:16 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-9", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:22:16.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:16 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3348390106' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-94822-4"}]: dispatch 2026-03-09T20:22:16.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:16 vm05 ceph-mon[61345]: from='client.25202 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-94822-4"}]: dispatch 2026-03-09T20:22:16.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:16 vm05 ceph-mon[61345]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:22:17.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:16 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:22:17.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:16 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:17.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:16 vm09 ceph-mon[54524]: from='client.25528 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm05-95542-3"}]': finished 2026-03-09T20:22:17.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:16 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:17.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:16 vm09 ceph-mon[54524]: osdmap e86: 8 total, 8 up, 8 in 2026-03-09T20:22:17.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:16 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/4196973593' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm05-94310-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:17.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:16 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/68721679' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm05-94281-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:17.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:16 vm09 ceph-mon[54524]: from='client.28103 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm05-94310-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:17.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:16 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-9", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:22:17.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:16 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-9", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:22:17.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:16 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3348390106' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-94822-4"}]: dispatch
2026-03-09T20:22:17.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:17 vm09 ceph-mon[54524]: from='client.25202 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-94822-4"}]: dispatch
2026-03-09T20:22:17.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:17 vm09 ceph-mon[54524]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T20:22:17.548 INFO:tasks.workunit.client.0.vm05.stdout: api_service: [==========] Running 4 tests from 1 test suite.
2026-03-09T20:22:17.548 INFO:tasks.workunit.client.0.vm05.stdout: api_service: [----------] Global test environment set-up.
2026-03-09T20:22:17.548 INFO:tasks.workunit.client.0.vm05.stdout: api_service: [----------] 4 tests from LibRadosService
2026-03-09T20:22:17.548 INFO:tasks.workunit.client.0.vm05.stdout: api_service: [ RUN ] LibRadosService.RegisterEarly
2026-03-09T20:22:17.548 INFO:tasks.workunit.client.0.vm05.stdout: api_service: [ OK ] LibRadosService.RegisterEarly (5070 ms)
2026-03-09T20:22:17.548 INFO:tasks.workunit.client.0.vm05.stdout: api_service: [ RUN ] LibRadosService.RegisterLate
2026-03-09T20:22:17.548 INFO:tasks.workunit.client.0.vm05.stdout: api_service: [ OK ] LibRadosService.RegisterLate (14 ms)
2026-03-09T20:22:17.548 INFO:tasks.workunit.client.0.vm05.stdout: api_service: [ RUN ] LibRadosService.StatusFormat
2026-03-09T20:22:17.548 INFO:tasks.workunit.client.0.vm05.stdout: api_service: cluster:
2026-03-09T20:22:17.548 INFO:tasks.workunit.client.0.vm05.stdout: api_service: id: c0151936-1bf4-11f1-b896-23f7bea8a6ea
2026-03-09T20:22:17.548 INFO:tasks.workunit.client.0.vm05.stdout: api_service: health: HEALTH_WARN
2026-03-09T20:22:17.548 INFO:tasks.workunit.client.0.vm05.stdout: api_service: 14 pool(s) do not have an application enabled
2026-03-09T20:22:17.548 INFO:tasks.workunit.client.0.vm05.stdout: api_service:
2026-03-09T20:22:17.548 INFO:tasks.workunit.client.0.vm05.stdout: api_service: services:
2026-03-09T20:22:17.548 INFO:tasks.workunit.client.0.vm05.stdout: api_service: mon: 3 daemons, quorum a,b,c (age 3m)
2026-03-09T20:22:17.548 INFO:tasks.workunit.client.0.vm05.stdout: api_service: mgr: y(active, since 71s), standbys: x
2026-03-09T20:22:17.548 INFO:tasks.workunit.client.0.vm05.stdout: api_service: osd: 8 osds: 8 up (since 95s), 8 in (since 106s)
2026-03-09T20:22:17.548 INFO:tasks.workunit.client.0.vm05.stdout: api_service: laundry: 2 daemons active (1 hosts)
2026-03-09T20:22:17.548 INFO:tasks.workunit.client.0.vm05.stdout: api_service: rgw: 1 daemon active (1 hosts, 1 zones)
2026-03-09T20:22:17.548 INFO:tasks.workunit.client.0.vm05.stdout: api_service:
2026-03-09T20:22:17.548 INFO:tasks.workunit.client.0.vm05.stdout: api_service: data:
2026-03-09T20:22:17.548 INFO:tasks.workunit.client.0.vm05.stdout: api_service: pools: 29 pools, 796 pgs
2026-03-09T20:22:17.548 INFO:tasks.workunit.client.0.vm05.stdout: api_service: objects: 199 objects, 455 KiB
2026-03-09T20:22:17.548 INFO:tasks.workunit.client.0.vm05.stdout: api_service: usage: 216 MiB used, 160 GiB / 160 GiB avail
2026-03-09T20:22:17.548 INFO:tasks.workunit.client.0.vm05.stdout: api_service: pgs: 83.417% pgs unknown
2026-03-09T20:22:17.548 INFO:tasks.workunit.client.0.vm05.stdout: api_service: 664 unknown
2026-03-09T20:22:17.548 INFO:tasks.workunit.client.0.vm05.stdout: api_service: 132 active+clean
2026-03-09T20:22:17.548 INFO:tasks.workunit.client.0.vm05.stdout: api_service:
2026-03-09T20:22:17.548 INFO:tasks.workunit.client.0.vm05.stdout: api_service: io:
2026-03-09T20:22:17.548 INFO:tasks.workunit.client.0.vm05.stdout: api_service: client: 1.2 KiB/s rd, 1 op/s rd, 0 op/s wr
2026-03-09T20:22:17.548 INFO:tasks.workunit.client.0.vm05.stdout: api_service: recovery: 116 B/s, 5 objects/s
2026-03-09T20:22:17.548 INFO:tasks.workunit.client.0.vm05.stdout: api_service:
2026-03-09T20:22:17.548 INFO:tasks.workunit.client.0.vm05.stdout: api_service:
2026-03-09T20:22:17.548 INFO:tasks.workunit.client.0.vm05.stdout: api_service: cluster:
2026-03-09T20:22:17.548 INFO:tasks.workunit.client.0.vm05.stdout: api_service: id: c0151936-1bf4-11f1-b896-23f7bea8a6ea
2026-03-09T20:22:17.549 INFO:tasks.workunit.client.0.vm05.stdout: api_service: health: HEALTH_WARN
2026-03-09T20:22:17.549 INFO:tasks.workunit.client.0.vm05.stdout: api_service: 17 pool(s) do not have an application enabled
2026-03-09T20:22:17.549 INFO:tasks.workunit.client.0.vm05.stdout: api_service:
2026-03-09T20:22:17.549 INFO:tasks.workunit.client.0.vm05.stdout: api_service: services:
2026-03-09T20:22:17.549 INFO:tasks.workunit.client.0.vm05.stdout: api_service: mon: 3 daemons, quorum a,b,c (age 3m)
2026-03-09T20:22:17.549 INFO:tasks.workunit.client.0.vm05.stdout: api_service: mgr: y(active, since 73s), standbys: x
2026-03-09T20:22:17.549 INFO:tasks.workunit.client.0.vm05.stdout: api_service: osd: 8 osds: 8 up (since 97s), 8 in (since 108s)
2026-03-09T20:22:17.549 INFO:tasks.workunit.client.0.vm05.stdout: api_service: foo: 16 portals active (1 hosts, 3 zones)
2026-03-09T20:22:17.549 INFO:tasks.workunit.client.0.vm05.stdout: api_service: laundry: 1 daemon active (1 hosts)
2026-03-09T20:22:17.549 INFO:tasks.workunit.client.0.vm05.stdout: api_service: rgw: 1 daemon active (1 hosts, 1 zones)
2026-03-09T20:22:17.549 INFO:tasks.workunit.client.0.vm05.stdout: api_service:
2026-03-09T20:22:17.549 INFO:tasks.workunit.client.0.vm05.stdout: api_service: data:
2026-03-09T20:22:17.549 INFO:tasks.workunit.client.0.vm05.stdout: api_service: pools: 33 pools, 900 pgs
2026-03-09T20:22:17.549 INFO:tasks.workunit.client.0.vm05.stdout: api_service: objects: 246 objects, 459 KiB
2026-03-09T20:22:17.549 INFO:tasks.workunit.client.0.vm05.stdout: api_service: usage: 348 MiB used, 160 GiB / 160 GiB avail
2026-03-09T20:22:17.549 INFO:tasks.workunit.client.0.vm05.stdout: api_service: pgs: 21.333% pgs unknown
2026-03-09T20:22:17.549 INFO:tasks.workunit.client.0.vm05.stdout: api_service: 24.889% pgs not active
2026-03-09T20:22:17.549 INFO:tasks.workunit.client.0.vm05.stdout: api_service: 484 active+clean
2026-03-09T20:22:17.549 INFO:tasks.workunit.client.0.vm05.stdout: api_service: 224 creating+peering
2026-03-09T20:22:17.549 INFO:tasks.workunit.client.0.vm05.stdout: api_service: 192 unknown
2026-03-09T20:22:17.549 INFO:tasks.workunit.client.0.vm05.stdout: api_service:
2026-03-09T20:22:17.549 INFO:tasks.workunit.client.0.vm05.stdout: api_service: io:
2026-03-09T20:22:17.549 INFO:tasks.workunit.client.0.vm05.stdout: api_service: client: 3.0 KiB/s rd, 23 KiB/s wr, 236 op/s rd, 269 op/s wr
2026-03-09T20:22:17.549 INFO:tasks.workunit.client.0.vm05.stdout: api_service:
2026-03-09T20:22:17.549 INFO:tasks.workunit.client.0.vm05.stdout: api_service:
2026-03-09T20:22:17.549 INFO:tasks.workunit.client.0.vm05.stdout: api_service: [ OK ] LibRadosService.StatusFormat (2411 ms)
2026-03-09T20:22:17.549 INFO:tasks.workunit.client.0.vm05.stdout: api_service: [ RUN ] LibRadosService.Status
2026-03-09T20:22:17.549 INFO:tasks.workunit.client.0.vm05.stdout: api_service: [ OK ] LibRadosService.Status (20023 ms)
2026-03-09T20:22:17.549 INFO:tasks.workunit.client.0.vm05.stdout: api_service: [----------] 4 tests from LibRadosService (27518 ms total)
2026-03-09T20:22:17.549 INFO:tasks.workunit.client.0.vm05.stdout: api_service:
2026-03-09T20:22:17.549 INFO:tasks.workunit.client.0.vm05.stdout: api_service: [----------] Global test environment tear-down
2026-03-09T20:22:17.549 INFO:tasks.workunit.client.0.vm05.stdout: api_service: [==========] 4 tests from 1 test suite ran. (27518 ms total)
2026-03-09T20:22:17.549 INFO:tasks.workunit.client.0.vm05.stdout: api_service: [ PASSED ] 4 tests.
2026-03-09T20:22:17.912 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:17 vm05 ceph-mon[51870]: pgmap v76: 648 pgs: 96 creating+peering, 2 active+clean+snaptrim, 64 unknown, 486 active+clean; 144 MiB data, 898 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 0 B/s wr, 4 op/s
2026-03-09T20:22:17.912 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:17 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-09T20:22:17.912 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:17 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/68721679' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm05-94281-9","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-09T20:22:17.912 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:17 vm05 ceph-mon[51870]: from='client.28103 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm05-94310-4","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-09T20:22:17.912 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:17 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-9", "force_nonempty": "--force-nonempty" }]': finished
2026-03-09T20:22:17.912 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:17 vm05 ceph-mon[51870]: from='client.25202 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-94822-4"}]': finished
2026-03-09T20:22:17.912 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:17 vm05 ceph-mon[51870]: osdmap e87: 8 total, 8 up, 8 in
2026-03-09T20:22:17.912 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:17 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-9"}]: dispatch
2026-03-09T20:22:17.912 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:17 vm05 ceph-mon[51870]: from='client.?
v1:192.168.123.105:0/3432671776' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:17.912 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:17 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-9"}]: dispatch 2026-03-09T20:22:17.912 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:17 vm05 ceph-mon[51870]: from='client.29719 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:17.912 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:17 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3348390106' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-94822-4", "tierpool": "test-rados-api-vm05-94822-6"}]: dispatch 2026-03-09T20:22:17.912 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:17 vm05 ceph-mon[51870]: from='client.25202 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-94822-4", "tierpool": "test-rados-api-vm05-94822-6"}]: dispatch 2026-03-09T20:22:17.912 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:17 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:17.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:17 vm05 ceph-mon[61345]: pgmap v76: 648 pgs: 96 creating+peering, 2 active+clean+snaptrim, 64 unknown, 486 active+clean; 144 MiB data, 898 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 0 B/s wr, 4 op/s 2026-03-09T20:22:17.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:17 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:17.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:17 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/68721679' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm05-94281-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:17.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:17 vm05 ceph-mon[61345]: from='client.28103 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm05-94310-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:17.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:17 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-9", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:22:17.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:17 vm05 ceph-mon[61345]: from='client.25202 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-94822-4"}]': finished 2026-03-09T20:22:17.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:17 vm05 ceph-mon[61345]: osdmap e87: 8 total, 8 up, 8 in 2026-03-09T20:22:17.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:17 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-9"}]: dispatch 2026-03-09T20:22:17.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:17 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3432671776' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:17.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:17 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-9"}]: dispatch 2026-03-09T20:22:17.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:17 vm05 ceph-mon[61345]: from='client.29719 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:17.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:17 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3348390106' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-94822-4", "tierpool": "test-rados-api-vm05-94822-6"}]: dispatch 2026-03-09T20:22:17.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:17 vm05 ceph-mon[61345]: from='client.25202 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-94822-4", "tierpool": "test-rados-api-vm05-94822-6"}]: dispatch 2026-03-09T20:22:17.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:17 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:18.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:17 vm09 ceph-mon[54524]: pgmap v76: 648 pgs: 96 creating+peering, 2 active+clean+snaptrim, 64 unknown, 486 active+clean; 144 MiB data, 898 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 0 B/s wr, 4 op/s 2026-03-09T20:22:18.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:17 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:18.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:17 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/68721679' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm05-94281-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:18.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:17 vm09 ceph-mon[54524]: from='client.28103 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm05-94310-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:18.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:17 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-9", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:22:18.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:17 vm09 ceph-mon[54524]: from='client.25202 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-94822-4"}]': finished 2026-03-09T20:22:18.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:17 vm09 ceph-mon[54524]: osdmap e87: 8 total, 8 up, 8 in 2026-03-09T20:22:18.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:17 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-9"}]: dispatch 2026-03-09T20:22:18.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:17 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3432671776' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:18.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:17 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-9"}]: dispatch 2026-03-09T20:22:18.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:17 vm09 ceph-mon[54524]: from='client.29719 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:18.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:17 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3348390106' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-94822-4", "tierpool": "test-rados-api-vm05-94822-6"}]: dispatch 2026-03-09T20:22:18.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:17 vm09 ceph-mon[54524]: from='client.25202 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-94822-4", "tierpool": "test-rados-api-vm05-94822-6"}]: dispatch 2026-03-09T20:22:18.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:17 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:18.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:18 vm05 ceph-mon[51870]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:22:18.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:18 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-9"}]': finished 2026-03-09T20:22:18.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:18 vm05 ceph-mon[51870]: from='client.29719 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:18.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:18 vm05 ceph-mon[51870]: from='client.25202 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-94822-4", "tierpool": "test-rados-api-vm05-94822-6"}]': finished 2026-03-09T20:22:18.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:18 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-9", "mode": "writeback"}]: dispatch 2026-03-09T20:22:18.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:18 vm05 ceph-mon[51870]: osdmap e88: 8 total, 8 up, 8 in 2026-03-09T20:22:18.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:18 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-9", "mode": "writeback"}]: dispatch 2026-03-09T20:22:18.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:18 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3348390106' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "test-rados-api-vm05-94822-6", "pool2": "test-rados-api-vm05-94822-6", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T20:22:18.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:18 vm05 ceph-mon[51870]: from='client.25202 ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "test-rados-api-vm05-94822-6", "pool2": "test-rados-api-vm05-94822-6", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T20:22:18.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:18 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/4002135007' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm05-94655-10"}]: dispatch 2026-03-09T20:22:18.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:18 vm05 ceph-mon[51870]: from='client.31108 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm05-94655-10"}]: dispatch 2026-03-09T20:22:18.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:18 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/4002135007' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm05-94655-10"}]: dispatch 2026-03-09T20:22:18.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:18 vm05 ceph-mon[51870]: from='client.31108 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm05-94655-10"}]: dispatch 2026-03-09T20:22:18.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:18 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/4002135007' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm05-94655-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:22:18.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:18 vm05 ceph-mon[51870]: from='client.31108 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm05-94655-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:22:18.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:18 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:18.911 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:22:18 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:22:18] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:22:18.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:18 vm05 ceph-mon[61345]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:22:18.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:18 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-9"}]': finished 2026-03-09T20:22:18.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:18 vm05 ceph-mon[61345]: from='client.29719 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:18.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:18 vm05 ceph-mon[61345]: from='client.25202 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-94822-4", "tierpool": "test-rados-api-vm05-94822-6"}]': finished 2026-03-09T20:22:18.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:18 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-9", "mode": "writeback"}]: dispatch 2026-03-09T20:22:18.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:18 vm05 ceph-mon[61345]: osdmap e88: 8 total, 8 up, 8 in 2026-03-09T20:22:18.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:18 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-9", "mode": "writeback"}]: dispatch 2026-03-09T20:22:18.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:18 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3348390106' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "test-rados-api-vm05-94822-6", "pool2": "test-rados-api-vm05-94822-6", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T20:22:18.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:18 vm05 ceph-mon[61345]: from='client.25202 ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "test-rados-api-vm05-94822-6", "pool2": "test-rados-api-vm05-94822-6", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T20:22:18.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:18 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/4002135007' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm05-94655-10"}]: dispatch 2026-03-09T20:22:18.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:18 vm05 ceph-mon[61345]: from='client.31108 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm05-94655-10"}]: dispatch 2026-03-09T20:22:18.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:18 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/4002135007' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm05-94655-10"}]: dispatch 2026-03-09T20:22:18.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:18 vm05 ceph-mon[61345]: from='client.31108 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm05-94655-10"}]: dispatch 2026-03-09T20:22:18.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:18 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/4002135007' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm05-94655-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:22:18.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:18 vm05 ceph-mon[61345]: from='client.31108 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm05-94655-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:22:18.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:18 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:18.939 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:18 vm09 ceph-mon[54524]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:22:18.940 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:18 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-9"}]': finished 2026-03-09T20:22:18.940 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:18 vm09 ceph-mon[54524]: from='client.29719 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:18.940 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:18 vm09 ceph-mon[54524]: from='client.25202 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-94822-4", "tierpool": "test-rados-api-vm05-94822-6"}]': finished 2026-03-09T20:22:18.940 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:18 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-9", "mode": "writeback"}]: dispatch 2026-03-09T20:22:18.940 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:18 vm09 ceph-mon[54524]: osdmap e88: 8 total, 8 up, 8 in 2026-03-09T20:22:18.940 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:18 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-9", "mode": "writeback"}]: dispatch 2026-03-09T20:22:18.940 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:18 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3348390106' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "test-rados-api-vm05-94822-6", "pool2": "test-rados-api-vm05-94822-6", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T20:22:18.940 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:18 vm09 ceph-mon[54524]: from='client.25202 ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "test-rados-api-vm05-94822-6", "pool2": "test-rados-api-vm05-94822-6", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T20:22:18.940 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:18 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/4002135007' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm05-94655-10"}]: dispatch 2026-03-09T20:22:18.940 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:18 vm09 ceph-mon[54524]: from='client.31108 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm05-94655-10"}]: dispatch 2026-03-09T20:22:18.940 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:18 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/4002135007' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm05-94655-10"}]: dispatch 2026-03-09T20:22:18.940 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:18 vm09 ceph-mon[54524]: from='client.31108 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm05-94655-10"}]: dispatch 2026-03-09T20:22:18.940 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:18 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/4002135007' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm05-94655-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:22:18.940 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:18 vm09 ceph-mon[54524]: from='client.31108 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm05-94655-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:22:18.940 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:18 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:19.648 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:19 vm05 ceph-mon[61345]: pgmap v79: 584 pgs: 32 creating+peering, 1 active+clean+snaptrim, 128 unknown, 423 active+clean; 144 MiB data, 898 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 3 op/s 2026-03-09T20:22:19.648 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:19 vm05 ceph-mon[61345]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:22:19.648 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:19 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-9", "mode": "writeback"}]': finished 2026-03-09T20:22:19.648 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:19 vm05 ceph-mon[61345]: from='client.25202 ' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "test-rados-api-vm05-94822-6", "pool2": "test-rados-api-vm05-94822-6", "yes_i_really_really_mean_it": true}]': finished 2026-03-09T20:22:19.648 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:19 vm05 ceph-mon[61345]: from='client.31108 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm05-94655-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:22:19.648 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:19 vm05 ceph-mon[61345]: osdmap e89: 8 total, 8 up, 8 in 2026-03-09T20:22:19.648 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:19 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/4002135007' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm05-94655-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm05-94655-10"}]: dispatch 2026-03-09T20:22:19.648 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:19 vm05 ceph-mon[61345]: from='client.31108 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm05-94655-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm05-94655-10"}]: dispatch 2026-03-09T20:22:19.648 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:19 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1652230321' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm05-94310-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:19.648 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:19 vm05 ceph-mon[61345]: from='client.31303 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm05-94310-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:19.648 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:19 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1637943590' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattr_vm05-94281-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:19.648 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:19 vm05 ceph-mon[61345]: from='client.30584 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattr_vm05-94281-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:19.649 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:19 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:22:19.649 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:19 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:22:19.649 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:19 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:19.650 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:19 vm05 ceph-mon[51870]: pgmap v79: 584 pgs: 32 creating+peering, 1 active+clean+snaptrim, 128 unknown, 423 active+clean; 144 MiB data, 898 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 3 op/s 2026-03-09T20:22:19.650 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:19 vm05 ceph-mon[51870]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:22:19.650 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:19 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-9", "mode": "writeback"}]': finished 2026-03-09T20:22:19.650 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:19 vm05 ceph-mon[51870]: from='client.25202 ' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "test-rados-api-vm05-94822-6", "pool2": "test-rados-api-vm05-94822-6", "yes_i_really_really_mean_it": true}]': finished 2026-03-09T20:22:19.650 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:19 vm05 ceph-mon[51870]: from='client.31108 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm05-94655-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:22:19.650 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:19 vm05 ceph-mon[51870]: osdmap e89: 8 total, 8 up, 8 in 2026-03-09T20:22:19.650 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:19 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/4002135007' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm05-94655-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm05-94655-10"}]: dispatch 2026-03-09T20:22:19.650 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:19 vm05 ceph-mon[51870]: from='client.31108 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm05-94655-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm05-94655-10"}]: dispatch 2026-03-09T20:22:19.650 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:19 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1652230321' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm05-94310-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:19.650 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:19 vm05 ceph-mon[51870]: from='client.31303 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm05-94310-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:19.650 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:19 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/1637943590' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattr_vm05-94281-10","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-09T20:22:19.650 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:19 vm05 ceph-mon[51870]: from='client.30584 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattr_vm05-94281-10","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-09T20:22:19.650 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:19 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch
2026-03-09T20:22:19.651 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:19 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch
2026-03-09T20:22:19.651 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:19 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-09T20:22:19.796 INFO:tasks.workunit.client.0.vm05.stdout:h_notify_pp: flushed
2026-03-09T20:22:19.796 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2Timeout/0 (3004 ms)
2026-03-09T20:22:19.796 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2Timeout/1
2026-03-09T20:22:19.796 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: trying...
2026-03-09T20:22:19.796 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: handle_notify cookie 94756211585440 notify_id 330712481800 notifier_gid 25202
2026-03-09T20:22:19.796 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: timed out
2026-03-09T20:22:19.796 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: flushing
2026-03-09T20:22:19.796 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: flushed
2026-03-09T20:22:19.796 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2Timeout/1 (3004 ms)
2026-03-09T20:22:19.796 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify3/0
2026-03-09T20:22:19.796 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: List watches
2026-03-09T20:22:19.796 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: notify2
2026-03-09T20:22:19.796 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: handle_notify cookie 94756211585440 notify_id 343597383689 notifier_gid 25202
2026-03-09T20:22:19.796 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: notify2 done
2026-03-09T20:22:19.797 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: watch_check
2026-03-09T20:22:19.797 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: unwatch2
2026-03-09T20:22:19.797 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: flushing
2026-03-09T20:22:19.797 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: done
2026-03-09T20:22:19.797 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify3/0 (3527 ms)
2026-03-09T20:22:19.797 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify3/1
2026-03-09T20:22:19.797 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: List watches
2026-03-09T20:22:19.797 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: notify2
2026-03-09T20:22:19.797 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: handle_notify cookie 94756211585440 notify_id 356482285578 notifier_gid 25202
2026-03-09T20:22:19.797 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: notify2 done
2026-03-09T20:22:19.797 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: watch_check
2026-03-09T20:22:19.797 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: unwatch2
2026-03-09T20:22:19.797 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: flushing
2026-03-09T20:22:19.797 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: done
2026-03-09T20:22:19.797 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify3/1 (3013 ms)
2026-03-09T20:22:19.797 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [----------] 14 tests from LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP (16578 ms total)
2026-03-09T20:22:19.797 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp:
2026-03-09T20:22:19.797 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [----------] Global test environment tear-down
2026-03-09T20:22:19.797 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [==========] 16 tests from 2 test suites ran. (29880 ms total)
2026-03-09T20:22:19.797 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ PASSED ] 16 tests.
2026-03-09T20:22:20.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:19 vm09 ceph-mon[54524]: pgmap v79: 584 pgs: 32 creating+peering, 1 active+clean+snaptrim, 128 unknown, 423 active+clean; 144 MiB data, 898 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 3 op/s
2026-03-09T20:22:20.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:19 vm09 ceph-mon[54524]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-09T20:22:20.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:19 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-9", "mode": "writeback"}]': finished
2026-03-09T20:22:20.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:19 vm09 ceph-mon[54524]: from='client.25202 ' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "test-rados-api-vm05-94822-6", "pool2": "test-rados-api-vm05-94822-6", "yes_i_really_really_mean_it": true}]': finished
2026-03-09T20:22:20.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:19 vm09 ceph-mon[54524]: from='client.31108 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm05-94655-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-09T20:22:20.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:19 vm09 ceph-mon[54524]: osdmap e89: 8 total, 8 up, 8 in
2026-03-09T20:22:20.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:19 vm09 ceph-mon[54524]: from='client.?
v1:192.168.123.105:0/4002135007' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm05-94655-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm05-94655-10"}]: dispatch 2026-03-09T20:22:20.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:19 vm09 ceph-mon[54524]: from='client.31108 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm05-94655-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm05-94655-10"}]: dispatch 2026-03-09T20:22:20.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:19 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1652230321' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm05-94310-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:20.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:19 vm09 ceph-mon[54524]: from='client.31303 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm05-94310-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:20.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:19 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1637943590' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattr_vm05-94281-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:20.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:19 vm09 ceph-mon[54524]: from='client.30584 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattr_vm05-94281-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:20.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:19 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:22:20.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:19 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:22:20.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:19 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:21.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:20 vm09 ceph-mon[54524]: from='client.31303 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm05-94310-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:21.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:20 vm09 ceph-mon[54524]: from='client.30584 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattr_vm05-94281-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:21.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:20 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:22:21.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:20 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-9"}]: dispatch 2026-03-09T20:22:21.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:20 vm09 ceph-mon[54524]: osdmap e90: 8 total, 8 up, 8 in 2026-03-09T20:22:21.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:20 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-9"}]: dispatch 2026-03-09T20:22:21.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:20 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:21.163 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:20 vm05 ceph-mon[51870]: from='client.31303 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm05-94310-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:21.163 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:20 vm05 ceph-mon[51870]: from='client.30584 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattr_vm05-94281-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:21.163 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:20 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:22:21.163 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:20 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-9"}]: dispatch 2026-03-09T20:22:21.163 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:20 vm05 ceph-mon[51870]: osdmap e90: 8 total, 8 up, 8 in 2026-03-09T20:22:21.164 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:20 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-9"}]: dispatch 2026-03-09T20:22:21.164 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:20 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:21.164 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:20 vm05 ceph-mon[61345]: from='client.31303 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm05-94310-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:21.164 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:20 vm05 ceph-mon[61345]: from='client.30584 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattr_vm05-94281-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:21.164 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:20 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:22:21.164 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:20 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-9"}]: dispatch 2026-03-09T20:22:21.164 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:20 vm05 ceph-mon[61345]: osdmap e90: 8 total, 8 up, 8 in 2026-03-09T20:22:21.164 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:20 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-9"}]: dispatch 2026-03-09T20:22:21.164 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:20 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:21.706 INFO:tasks.workunit.client.0.vm05.stdout: api_service_pp: [==========] Running 4 tests from 1 test suite. 2026-03-09T20:22:21.706 INFO:tasks.workunit.client.0.vm05.stdout: api_service_pp: [----------] Global test environment set-up. 2026-03-09T20:22:21.706 INFO:tasks.workunit.client.0.vm05.stdout: api_service_pp: [----------] 4 tests from LibRadosServicePP 2026-03-09T20:22:21.706 INFO:tasks.workunit.client.0.vm05.stdout: api_service_pp: [ RUN ] LibRadosServicePP.RegisterEarly 2026-03-09T20:22:21.706 INFO:tasks.workunit.client.0.vm05.stdout: api_service_pp: [ OK ] LibRadosServicePP.RegisterEarly (5099 ms) 2026-03-09T20:22:21.706 INFO:tasks.workunit.client.0.vm05.stdout: api_service_pp: [ RUN ] LibRadosServicePP.RegisterLate 2026-03-09T20:22:21.706 INFO:tasks.workunit.client.0.vm05.stdout: api_service_pp: [ OK ] LibRadosServicePP.RegisterLate (100 ms) 2026-03-09T20:22:21.706 INFO:tasks.workunit.client.0.vm05.stdout: api_service_pp: [ RUN ] LibRadosServicePP.Status 2026-03-09T20:22:21.706 INFO:tasks.workunit.client.0.vm05.stdout: api_service_pp: [ OK ] LibRadosServicePP.Status (20035 ms) 2026-03-09T20:22:21.706 INFO:tasks.workunit.client.0.vm05.stdout: api_service_pp: [ RUN ] LibRadosServicePP.Close 2026-03-09T20:22:21.706 INFO:tasks.workunit.client.0.vm05.stdout: api_service_pp: attempt 0 of 20 2026-03-09T20:22:21.706 INFO:tasks.workunit.client.0.vm05.stdout: api_service_pp: [ OK ] LibRadosServicePP.Close (6298 ms) 2026-03-09T20:22:21.706 INFO:tasks.workunit.client.0.vm05.stdout: api_service_pp: [----------] 4 tests from LibRadosServicePP (31532 ms total) 2026-03-09T20:22:21.706 INFO:tasks.workunit.client.0.vm05.stdout: api_service_pp: 2026-03-09T20:22:21.706 INFO:tasks.workunit.client.0.vm05.stdout: api_service_pp: [----------] Global test environment tear-down 2026-03-09T20:22:21.706 INFO:tasks.workunit.client.0.vm05.stdout: api_service_pp: [==========] 4 tests from 1 test suite ran. (31532 ms total) 2026-03-09T20:22:21.706 INFO:tasks.workunit.client.0.vm05.stdout: api_service_pp: [ PASSED ] 4 tests. 
2026-03-09T20:22:22.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:21 vm09 ceph-mon[54524]: pgmap v82: 580 pgs: 32 creating+peering, 1 active+clean+snaptrim, 160 unknown, 387 active+clean; 144 MiB data, 898 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:22:22.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:21 vm09 ceph-mon[54524]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:22:22.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:21 vm09 ceph-mon[54524]: from='client.31108 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm05-94655-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm05-94655-10"}]': finished 2026-03-09T20:22:22.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:21 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-9"}]': finished 2026-03-09T20:22:22.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:21 vm09 ceph-mon[54524]: osdmap e91: 8 total, 8 up, 8 in 2026-03-09T20:22:22.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:21 vm09 ceph-mon[54524]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:22:22.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:21 vm09 ceph-mon[54524]: osdmap e92: 8 total, 8 up, 8 in 2026-03-09T20:22:22.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:21 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:22.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:21 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/1798635388' entity='client.admin' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-09T20:22:22.163 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:21 vm05 ceph-mon[51870]: pgmap v82: 580 pgs: 32 creating+peering, 1 active+clean+snaptrim, 160 unknown, 387 active+clean; 144 MiB data, 898 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:22:22.164 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:21 vm05 ceph-mon[51870]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:22:22.164 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:21 vm05 ceph-mon[51870]: from='client.31108 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm05-94655-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm05-94655-10"}]': finished 2026-03-09T20:22:22.164 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:21 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-9"}]': finished 2026-03-09T20:22:22.164 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:21 vm05 ceph-mon[51870]: osdmap e91: 8 total, 8 up, 8 in 2026-03-09T20:22:22.164 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:21 vm05 ceph-mon[51870]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:22:22.164 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:21 vm05 ceph-mon[51870]: osdmap e92: 8 total, 8 up, 8 in 2026-03-09T20:22:22.164 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:21 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:22.164 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:21 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/1798635388' entity='client.admin' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-09T20:22:22.164 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:21 vm05 ceph-mon[61345]: pgmap v82: 580 pgs: 32 creating+peering, 1 active+clean+snaptrim, 160 unknown, 387 active+clean; 144 MiB data, 898 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:22:22.164 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:21 vm05 ceph-mon[61345]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:22:22.164 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:21 vm05 ceph-mon[61345]: from='client.31108 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm05-94655-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm05-94655-10"}]': finished 2026-03-09T20:22:22.164 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:21 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-9"}]': finished 2026-03-09T20:22:22.164 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:21 vm05 ceph-mon[61345]: osdmap e91: 8 total, 8 up, 8 in 2026-03-09T20:22:22.164 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:21 vm05 ceph-mon[61345]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:22:22.164 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:21 vm05 ceph-mon[61345]: osdmap e92: 8 total, 8 up, 8 in 2026-03-09T20:22:22.164 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:21 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:22.164 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:21 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1798635388' entity='client.admin' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-09T20:22:23.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:22 vm05 ceph-mon[51870]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-09T20:22:23.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:22 vm05 ceph-mon[51870]: osdmap e93: 8 total, 8 up, 8 in 2026-03-09T20:22:23.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:22 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:23.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:22 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/2811400070' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm05-94281-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:23.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:22 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:23.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:22 vm05 ceph-mon[51870]: from='client.33199 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm05-94281-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:23.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:22 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1546130301' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:23.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:22 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1123494009' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm05-94310-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:23.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:22 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:23.161 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:22 vm05 ceph-mon[61345]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-09T20:22:23.161 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:22 vm05 ceph-mon[61345]: osdmap e93: 8 total, 8 up, 8 in 2026-03-09T20:22:23.161 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:22 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:23.161 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:22 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2811400070' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm05-94281-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:23.161 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:22 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:23.161 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:22 vm05 ceph-mon[61345]: from='client.33199 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm05-94281-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:23.162 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:22 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1546130301' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:23.162 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:22 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1123494009' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm05-94310-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:23.162 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:22 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:23.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:22 vm09 ceph-mon[54524]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-09T20:22:23.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:22 vm09 ceph-mon[54524]: osdmap e93: 8 total, 8 up, 8 in 2026-03-09T20:22:23.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:22 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:23.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:22 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2811400070' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm05-94281-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:23.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:22 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:23.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:22 vm09 ceph-mon[54524]: from='client.33199 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm05-94281-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:23.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:22 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1546130301' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:23.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:22 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1123494009' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm05-94310-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:23.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:22 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:24.143 INFO:tasks.workunit.client.0.vm05.stdout: misc: Running main() from gmock_main.cc 2026-03-09T20:22:24.143 INFO:tasks.workunit.client.0.vm05.stdout: misc: [==========] Running 12 tests from 1 test suite. 2026-03-09T20:22:24.143 INFO:tasks.workunit.client.0.vm05.stdout: misc: [----------] Global test environment set-up. 
2026-03-09T20:22:24.143 INFO:tasks.workunit.client.0.vm05.stdout: misc: [----------] 12 tests from NeoRadosMisc 2026-03-09T20:22:24.143 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ RUN ] NeoRadosMisc.Version 2026-03-09T20:22:24.143 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ OK ] NeoRadosMisc.Version (1783 ms) 2026-03-09T20:22:24.143 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ RUN ] NeoRadosMisc.WaitOSDMap 2026-03-09T20:22:24.143 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ OK ] NeoRadosMisc.WaitOSDMap (2022 ms) 2026-03-09T20:22:24.143 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ RUN ] NeoRadosMisc.LongName 2026-03-09T20:22:24.143 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ OK ] NeoRadosMisc.LongName (3859 ms) 2026-03-09T20:22:24.143 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ RUN ] NeoRadosMisc.LongLocator 2026-03-09T20:22:24.143 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ OK ] NeoRadosMisc.LongLocator (2607 ms) 2026-03-09T20:22:24.143 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ RUN ] NeoRadosMisc.LongNamespace 2026-03-09T20:22:24.143 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ OK ] NeoRadosMisc.LongNamespace (3166 ms) 2026-03-09T20:22:24.143 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ RUN ] NeoRadosMisc.LongAttrName 2026-03-09T20:22:24.143 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ OK ] NeoRadosMisc.LongAttrName (3005 ms) 2026-03-09T20:22:24.143 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ RUN ] NeoRadosMisc.Exec 2026-03-09T20:22:24.143 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ OK ] NeoRadosMisc.Exec (3155 ms) 2026-03-09T20:22:24.143 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ RUN ] NeoRadosMisc.Operate1 2026-03-09T20:22:24.143 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ OK ] NeoRadosMisc.Operate1 (3095 ms) 2026-03-09T20:22:24.143 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ RUN ] NeoRadosMisc.Operate2 2026-03-09T20:22:24.143 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ OK ] NeoRadosMisc.Operate2 (3268 ms) 2026-03-09T20:22:24.143 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ RUN ] NeoRadosMisc.BigObject 2026-03-09T20:22:24.144 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ OK ] NeoRadosMisc.BigObject (3020 ms) 2026-03-09T20:22:24.144 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ RUN ] NeoRadosMisc.BigAttr 2026-03-09T20:22:24.144 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ OK ] NeoRadosMisc.BigAttr (1313 ms) 2026-03-09T20:22:24.144 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ RUN ] NeoRadosMisc.WriteSame 2026-03-09T20:22:24.144 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ OK ] NeoRadosMisc.WriteSame (3108 ms) 2026-03-09T20:22:24.144 INFO:tasks.workunit.client.0.vm05.stdout: misc: [----------] 12 tests from NeoRadosMisc (33402 ms total) 2026-03-09T20:22:24.144 INFO:tasks.workunit.client.0.vm05.stdout: misc: 2026-03-09T20:22:24.144 INFO:tasks.workunit.client.0.vm05.stdout: misc: [----------] Global test environment tear-down 2026-03-09T20:22:24.144 INFO:tasks.workunit.client.0.vm05.stdout: misc: [==========] 12 tests from 1 test suite ran. (33402 ms total) 2026-03-09T20:22:24.144 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ PASSED ] 12 tests. 
2026-03-09T20:22:24.162 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:23 vm05 ceph-mon[51870]: pgmap v86: 588 pgs: 200 unknown, 388 active+clean; 144 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 4.0 KiB/s wr, 6 op/s 2026-03-09T20:22:24.162 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:23 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:24.162 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:23 vm05 ceph-mon[51870]: from='client.33199 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrIter_vm05-94281-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:24.162 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:23 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1546130301' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:24.162 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:23 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1123494009' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm05-94310-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:24.162 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:23 vm05 ceph-mon[51870]: osdmap e94: 8 total, 8 up, 8 in 2026-03-09T20:22:24.162 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:23 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:24.162 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:23 vm05 ceph-mon[61345]: pgmap v86: 588 pgs: 200 unknown, 388 active+clean; 144 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 4.0 KiB/s wr, 6 op/s 2026-03-09T20:22:24.162 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:23 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:24.162 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:23 vm05 ceph-mon[61345]: from='client.33199 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrIter_vm05-94281-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:24.162 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:23 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1546130301' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:24.162 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:23 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1123494009' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm05-94310-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:24.162 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:23 vm05 ceph-mon[61345]: osdmap e94: 8 total, 8 up, 8 in 2026-03-09T20:22:24.162 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:23 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:24.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:23 vm09 ceph-mon[54524]: pgmap v86: 588 pgs: 200 unknown, 388 active+clean; 144 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 4.0 KiB/s wr, 6 op/s 2026-03-09T20:22:24.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:23 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:24.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:23 vm09 ceph-mon[54524]: from='client.33199 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrIter_vm05-94281-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:24.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:23 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1546130301' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94689-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:24.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:23 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1123494009' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm05-94310-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:24.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:23 vm09 ceph-mon[54524]: osdmap e94: 8 total, 8 up, 8 in 2026-03-09T20:22:24.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:23 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:25.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:25 vm05 ceph-mon[51870]: osdmap e95: 8 total, 8 up, 8 in 2026-03-09T20:22:25.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:25 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1546130301' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94689-10", "tierpool":"test-rados-api-vm05-94689-10-cache", "force_nonempty":""}]: dispatch 2026-03-09T20:22:25.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:25 vm05 ceph-mon[51870]: pgmap v89: 556 pgs: 168 unknown, 388 active+clean; 144 MiB data, 939 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:22:25.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:25 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:25.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:25 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-11", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:22:25.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:25 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-11", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:22:25.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:25 vm05 ceph-mon[61345]: osdmap e95: 8 total, 8 up, 8 in 2026-03-09T20:22:25.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:25 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1546130301' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94689-10", "tierpool":"test-rados-api-vm05-94689-10-cache", "force_nonempty":""}]: dispatch 2026-03-09T20:22:25.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:25 vm05 ceph-mon[61345]: pgmap v89: 556 pgs: 168 unknown, 388 active+clean; 144 MiB data, 939 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:22:25.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:25 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:25.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:25 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-11", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:22:25.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:25 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-11", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:22:25.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:25 vm09 ceph-mon[54524]: osdmap e95: 8 total, 8 up, 8 in 2026-03-09T20:22:25.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:25 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1546130301' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94689-10", "tierpool":"test-rados-api-vm05-94689-10-cache", "force_nonempty":""}]: dispatch 2026-03-09T20:22:25.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:25 vm09 ceph-mon[54524]: pgmap v89: 556 pgs: 168 unknown, 388 active+clean; 144 MiB data, 939 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:22:25.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:25 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:25.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:25 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-11", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:22:25.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:25 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-11", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:22:25.523 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:22:25 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:22:26.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:26 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1546130301' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94689-10", "tierpool":"test-rados-api-vm05-94689-10-cache", "force_nonempty":""}]': finished 2026-03-09T20:22:26.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:26 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-11", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:22:26.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:26 vm09 ceph-mon[54524]: osdmap e96: 8 total, 8 up, 8 in 2026-03-09T20:22:26.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:26 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1546130301' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94689-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:26.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:26 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-11"}]: dispatch 2026-03-09T20:22:26.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:26 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/4209772938' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm05-94310-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:26.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:26 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-11"}]: dispatch 2026-03-09T20:22:26.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:26 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3115856566' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsComplete_vm05-94281-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:26.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:26 vm09 ceph-mon[54524]: from='client.35209 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm05-94310-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:26.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:26 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:22:26.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:26 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:26.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:26 vm09 ceph-mon[54524]: Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:22:26.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:26 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1546130301' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94689-10", "tierpool":"test-rados-api-vm05-94689-10-cache", "force_nonempty":""}]': finished 2026-03-09T20:22:26.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:26 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-11", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:22:26.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:26 vm05 ceph-mon[51870]: osdmap e96: 8 total, 8 up, 8 in 2026-03-09T20:22:26.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:26 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1546130301' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94689-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:26.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:26 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-11"}]: dispatch 2026-03-09T20:22:26.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:26 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/4209772938' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm05-94310-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:26.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:26 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-11"}]: dispatch 2026-03-09T20:22:26.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:26 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3115856566' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsComplete_vm05-94281-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:26.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:26 vm05 ceph-mon[51870]: from='client.35209 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm05-94310-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:26.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:26 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:22:26.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:26 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:26.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:26 vm05 ceph-mon[51870]: Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:22:26.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:26 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1546130301' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94689-10", "tierpool":"test-rados-api-vm05-94689-10-cache", "force_nonempty":""}]': finished 2026-03-09T20:22:26.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:26 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-11", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:22:26.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:26 vm05 ceph-mon[61345]: osdmap e96: 8 total, 8 up, 8 in 2026-03-09T20:22:26.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:26 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1546130301' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94689-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:26.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:26 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-11"}]: dispatch 2026-03-09T20:22:26.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:26 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/4209772938' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm05-94310-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:26.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:26 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-11"}]: dispatch 2026-03-09T20:22:26.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:26 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3115856566' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsComplete_vm05-94281-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:26.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:26 vm05 ceph-mon[61345]: from='client.35209 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm05-94310-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:26.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:26 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:22:26.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:26 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:26.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:26 vm05 ceph-mon[61345]: Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:22:27.272 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:22:26 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=infra.usagestats t=2026-03-09T20:22:26.802994339Z level=info msg="Usage stats are ready to report" 2026-03-09T20:22:27.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:27 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1546130301' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94689-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:27.662 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:27 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-11"}]': finished 2026-03-09T20:22:27.662 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:27 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3115856566' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsComplete_vm05-94281-12","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:27.662 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:27 vm05 ceph-mon[51870]: from='client.35209 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm05-94310-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:27.662 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:27 vm05 ceph-mon[51870]: osdmap e97: 8 total, 8 up, 8 in 2026-03-09T20:22:27.662 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:27 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-11", "mode": "writeback"}]: dispatch 2026-03-09T20:22:27.662 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:27 vm05 ceph-mon[51870]: pgmap v92: 652 pgs: 32 unknown, 96 creating+peering, 22 creating+activating, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 496 active+clean; 144 MiB data, 963 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 259 KiB/s wr, 5 op/s 2026-03-09T20:22:27.662 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:27 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-11", "mode": "writeback"}]: dispatch 2026-03-09T20:22:27.662 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:27 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1546130301' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94689-10", "tierpool":"test-rados-api-vm05-94689-10-cache"}]: dispatch 2026-03-09T20:22:27.662 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:27 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:27.662 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:27 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1546130301' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94689-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:27.662 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:27 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-11"}]': finished 2026-03-09T20:22:27.662 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:27 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3115856566' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsComplete_vm05-94281-12","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:27.662 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:27 vm05 ceph-mon[61345]: from='client.35209 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm05-94310-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:27.662 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:27 vm05 ceph-mon[61345]: osdmap e97: 8 total, 8 up, 8 in 2026-03-09T20:22:27.662 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:27 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-11", "mode": "writeback"}]: dispatch 2026-03-09T20:22:27.662 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:27 vm05 ceph-mon[61345]: pgmap v92: 652 pgs: 32 unknown, 96 creating+peering, 22 creating+activating, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 496 active+clean; 144 MiB data, 963 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 259 KiB/s wr, 5 op/s 2026-03-09T20:22:27.662 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:27 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-11", "mode": "writeback"}]: dispatch 2026-03-09T20:22:27.662 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:27 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1546130301' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94689-10", "tierpool":"test-rados-api-vm05-94689-10-cache"}]: dispatch 2026-03-09T20:22:27.662 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:27 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:27.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:27 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1546130301' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94689-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:27.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:27 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-11"}]': finished 2026-03-09T20:22:27.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:27 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3115856566' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsComplete_vm05-94281-12","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:27.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:27 vm09 ceph-mon[54524]: from='client.35209 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm05-94310-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:27.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:27 vm09 ceph-mon[54524]: osdmap e97: 8 total, 8 up, 8 in 2026-03-09T20:22:27.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:27 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-11", "mode": "writeback"}]: dispatch 2026-03-09T20:22:27.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:27 vm09 ceph-mon[54524]: pgmap v92: 652 pgs: 32 unknown, 96 creating+peering, 22 creating+activating, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 496 active+clean; 144 MiB data, 963 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 259 KiB/s wr, 5 op/s 2026-03-09T20:22:27.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:27 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-11", "mode": "writeback"}]: dispatch 2026-03-09T20:22:27.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:27 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1546130301' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94689-10", "tierpool":"test-rados-api-vm05-94689-10-cache"}]: dispatch 2026-03-09T20:22:27.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:27 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:28.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:28 vm05 ceph-mon[51870]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:22:28.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:28 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-11", "mode": "writeback"}]': finished 2026-03-09T20:22:28.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:28 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1546130301' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94689-10", "tierpool":"test-rados-api-vm05-94689-10-cache"}]': finished 2026-03-09T20:22:28.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:28 vm05 ceph-mon[51870]: osdmap e98: 8 total, 8 up, 8 in 2026-03-09T20:22:28.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:28 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:28.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:28 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-09T20:22:28.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:28 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-09T20:22:28.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:28 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-09T20:22:28.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:28 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-09T20:22:28.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:28 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-09T20:22:28.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:28 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-09T20:22:28.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:28 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-09T20:22:28.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:28 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-09T20:22:28.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:28 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-09T20:22:28.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:28 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-09T20:22:28.660 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:22:28 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:22:28] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:22:28.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:28 vm05 ceph-mon[61345]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:22:28.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:28 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-11", "mode": "writeback"}]': finished 2026-03-09T20:22:28.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:28 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1546130301' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94689-10", "tierpool":"test-rados-api-vm05-94689-10-cache"}]': finished 2026-03-09T20:22:28.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:28 vm05 ceph-mon[61345]: osdmap e98: 8 total, 8 up, 8 in 2026-03-09T20:22:28.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:28 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:28.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:28 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-09T20:22:28.661 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:28 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-09T20:22:28.661 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:28 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-09T20:22:28.661 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:28 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-09T20:22:28.661 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:28 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-09T20:22:28.661 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:28 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-09T20:22:28.661 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:28 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-09T20:22:28.661 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:28 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-09T20:22:28.661 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:28 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-09T20:22:28.661 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:28 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-09T20:22:28.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:28 vm09 ceph-mon[54524]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:22:28.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:28 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-11", "mode": "writeback"}]': finished 2026-03-09T20:22:28.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:28 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1546130301' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94689-10", "tierpool":"test-rados-api-vm05-94689-10-cache"}]': finished 2026-03-09T20:22:28.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:28 vm09 ceph-mon[54524]: osdmap e98: 8 total, 8 up, 8 in 2026-03-09T20:22:28.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:28 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:28.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:28 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-09T20:22:28.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:28 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-09T20:22:28.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:28 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-09T20:22:28.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:28 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-09T20:22:28.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:28 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-09T20:22:28.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:28 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-09T20:22:28.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:28 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-09T20:22:28.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:28 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-09T20:22:28.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:28 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-09T20:22:28.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:28 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-09T20:22:29.433 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: Running main() from gmock_main.cc 2026-03-09T20:22:29.433 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [==========] Running 9 tests from 1 test suite. 2026-03-09T20:22:29.433 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [----------] Global test environment set-up. 
2026-03-09T20:22:29.433 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [----------] 9 tests from LibRadosPools 2026-03-09T20:22:29.433 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [ RUN ] LibRadosPools.PoolList 2026-03-09T20:22:29.433 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [ OK ] LibRadosPools.PoolList (2664 ms) 2026-03-09T20:22:29.433 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [ RUN ] LibRadosPools.PoolLookup 2026-03-09T20:22:29.433 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [ OK ] LibRadosPools.PoolLookup (2973 ms) 2026-03-09T20:22:29.433 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [ RUN ] LibRadosPools.PoolLookup2 2026-03-09T20:22:29.433 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [ OK ] LibRadosPools.PoolLookup2 (3897 ms) 2026-03-09T20:22:29.433 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [ RUN ] LibRadosPools.PoolLookupOtherInstance 2026-03-09T20:22:29.433 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [ OK ] LibRadosPools.PoolLookupOtherInstance (2705 ms) 2026-03-09T20:22:29.433 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [ RUN ] LibRadosPools.PoolReverseLookupOtherInstance 2026-03-09T20:22:29.433 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [ OK ] LibRadosPools.PoolReverseLookupOtherInstance (3073 ms) 2026-03-09T20:22:29.433 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [ RUN ] LibRadosPools.PoolDelete 2026-03-09T20:22:29.433 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [ OK ] LibRadosPools.PoolDelete (5138 ms) 2026-03-09T20:22:29.433 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [ RUN ] LibRadosPools.PoolCreateDelete 2026-03-09T20:22:29.433 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [ OK ] LibRadosPools.PoolCreateDelete (5202 ms) 2026-03-09T20:22:29.433 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [ RUN ] LibRadosPools.PoolCreateWithCrushRule 2026-03-09T20:22:29.433 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [ OK ] LibRadosPools.PoolCreateWithCrushRule (5181 ms) 2026-03-09T20:22:29.433 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [ RUN ] LibRadosPools.PoolGetBaseTier 2026-03-09T20:22:29.433 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [ OK ] LibRadosPools.PoolGetBaseTier (8728 ms) 2026-03-09T20:22:29.434 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [----------] 9 tests from LibRadosPools (39561 ms total) 2026-03-09T20:22:29.434 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: 2026-03-09T20:22:29.434 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [----------] Global test environment tear-down 2026-03-09T20:22:29.434 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [==========] 9 tests from 1 test suite ran. (39561 ms total) 2026-03-09T20:22:29.434 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [ PASSED ] 9 tests. 2026-03-09T20:22:29.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:29 vm09 ceph-mon[54524]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-09T20:22:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:29 vm09 ceph-mon[54524]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-09T20:22:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:29 vm09 ceph-mon[54524]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-09T20:22:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:29 vm09 ceph-mon[54524]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-09T20:22:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:29 vm09 ceph-mon[54524]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-09T20:22:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:29 vm09 ceph-mon[54524]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-09T20:22:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:29 vm09 ceph-mon[54524]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-09T20:22:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:29 vm09 ceph-mon[54524]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-09T20:22:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:29 vm09 ceph-mon[54524]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-09T20:22:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:29 vm09 ceph-mon[54524]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-09T20:22:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:29 vm09 ceph-mon[54524]: pgmap v94: 556 pgs: 32 unknown, 22 creating+activating, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 496 active+clean; 144 MiB data, 963 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 252 KiB/s wr, 5 op/s 2026-03-09T20:22:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:29 vm09 ceph-mon[54524]: osdmap e99: 8 total, 8 up, 8 in 2026-03-09T20:22:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:29 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/141835562' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm05-94310-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:29 vm09 ceph-mon[54524]: from='client.37156 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm05-94310-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:29 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:29 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:22:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:29 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:22:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:29 vm09 ceph-mon[54524]: 16.0 deep-scrub starts 2026-03-09T20:22:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:29 vm09 ceph-mon[54524]: 16.0 deep-scrub ok 2026-03-09T20:22:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:29 vm09 ceph-mon[54524]: 16.3 deep-scrub starts 2026-03-09T20:22:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:29 vm09 ceph-mon[54524]: 16.3 deep-scrub ok 2026-03-09T20:22:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:29 vm09 ceph-mon[54524]: 16.9 deep-scrub starts 2026-03-09T20:22:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:29 vm09 ceph-mon[54524]: 16.9 deep-scrub ok 2026-03-09T20:22:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:29 vm09 ceph-mon[54524]: from='client.37156 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm05-94310-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:29 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:22:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:29 vm09 ceph-mon[54524]: osdmap e100: 8 total, 8 up, 8 in 2026-03-09T20:22:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:29 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:22:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:29 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-11"}]: dispatch 2026-03-09T20:22:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[51870]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-09T20:22:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[51870]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-09T20:22:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[51870]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-09T20:22:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[51870]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-09T20:22:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[51870]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-09T20:22:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[51870]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-09T20:22:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[51870]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-09T20:22:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[51870]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-09T20:22:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[51870]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-09T20:22:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[51870]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-09T20:22:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[51870]: pgmap v94: 556 pgs: 32 unknown, 22 creating+activating, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 496 active+clean; 144 MiB data, 963 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 252 KiB/s wr, 5 op/s 2026-03-09T20:22:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[51870]: osdmap e99: 8 total, 8 up, 8 in 2026-03-09T20:22:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/141835562' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm05-94310-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[51870]: from='client.37156 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm05-94310-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:22:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:22:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[51870]: 16.0 deep-scrub starts 2026-03-09T20:22:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[51870]: 16.0 deep-scrub ok 2026-03-09T20:22:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[51870]: 16.3 deep-scrub starts 2026-03-09T20:22:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[51870]: 16.3 deep-scrub ok 2026-03-09T20:22:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[51870]: 16.9 deep-scrub starts 2026-03-09T20:22:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[51870]: 16.9 deep-scrub ok 2026-03-09T20:22:29.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[51870]: from='client.37156 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm05-94310-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:29.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:22:29.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[51870]: osdmap e100: 8 total, 8 up, 8 in 2026-03-09T20:22:29.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:22:29.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-11"}]: dispatch 2026-03-09T20:22:29.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[61345]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-09T20:22:29.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[61345]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-09T20:22:29.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[61345]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-09T20:22:29.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[61345]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-09T20:22:29.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[61345]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-09T20:22:29.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[61345]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-09T20:22:29.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[61345]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-09T20:22:29.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[61345]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-09T20:22:29.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[61345]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-09T20:22:29.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[61345]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-09T20:22:29.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[61345]: pgmap v94: 556 pgs: 32 unknown, 22 creating+activating, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 496 active+clean; 144 MiB data, 963 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 252 KiB/s wr, 5 op/s 2026-03-09T20:22:29.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[61345]: osdmap e99: 8 total, 8 up, 8 in 2026-03-09T20:22:29.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/141835562' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm05-94310-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:29.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[61345]: from='client.37156 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm05-94310-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:29.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:29.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:22:29.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:22:29.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[61345]: 16.0 deep-scrub starts 2026-03-09T20:22:29.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[61345]: 16.0 deep-scrub ok 2026-03-09T20:22:29.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[61345]: 16.3 deep-scrub starts 2026-03-09T20:22:29.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[61345]: 16.3 deep-scrub ok 2026-03-09T20:22:29.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[61345]: 16.9 deep-scrub starts 2026-03-09T20:22:29.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[61345]: 16.9 deep-scrub ok 2026-03-09T20:22:29.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[61345]: from='client.37156 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm05-94310-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:29.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:22:29.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[61345]: osdmap e100: 8 total, 8 up, 8 in 2026-03-09T20:22:29.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:22:29.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:29 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-11"}]: dispatch 2026-03-09T20:22:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:30 vm05 ceph-mon[51870]: 16.1 deep-scrub starts 2026-03-09T20:22:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:30 vm05 ceph-mon[51870]: 16.1 deep-scrub ok 2026-03-09T20:22:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:30 vm05 ceph-mon[51870]: 16.2 deep-scrub starts 2026-03-09T20:22:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:30 vm05 ceph-mon[51870]: 16.4 deep-scrub starts 2026-03-09T20:22:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:30 vm05 ceph-mon[51870]: 16.2 deep-scrub ok 2026-03-09T20:22:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:30 vm05 ceph-mon[51870]: 16.4 deep-scrub ok 2026-03-09T20:22:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:30 vm05 ceph-mon[51870]: 16.6 deep-scrub starts 2026-03-09T20:22:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:30 vm05 ceph-mon[51870]: 16.6 deep-scrub ok 2026-03-09T20:22:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:30 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:30 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-11"}]: dispatch 2026-03-09T20:22:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:30 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2591376703' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafe_vm05-94281-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:30 vm05 ceph-mon[51870]: 16.5 deep-scrub starts 2026-03-09T20:22:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:30 vm05 ceph-mon[51870]: 16.5 deep-scrub ok 2026-03-09T20:22:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:30 vm05 ceph-mon[51870]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:22:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:30 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-11"}]': finished 2026-03-09T20:22:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:30 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2591376703' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafe_vm05-94281-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:30 vm05 ceph-mon[51870]: osdmap e101: 8 total, 8 up, 8 in 2026-03-09T20:22:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:30 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/4002135007' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm05-94655-10"}]: dispatch 2026-03-09T20:22:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:30 vm05 ceph-mon[51870]: from='client.31108 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm05-94655-10"}]: dispatch 2026-03-09T20:22:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:30 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:30 vm05 ceph-mon[61345]: 16.1 deep-scrub starts 2026-03-09T20:22:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:30 vm05 ceph-mon[61345]: 16.1 deep-scrub ok 2026-03-09T20:22:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:30 vm05 ceph-mon[61345]: 16.2 deep-scrub starts 2026-03-09T20:22:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:30 vm05 ceph-mon[61345]: 16.4 deep-scrub starts 2026-03-09T20:22:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:30 vm05 ceph-mon[61345]: 16.2 deep-scrub ok 2026-03-09T20:22:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:30 vm05 ceph-mon[61345]: 16.4 deep-scrub ok 2026-03-09T20:22:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:30 vm05 ceph-mon[61345]: 16.6 deep-scrub starts 2026-03-09T20:22:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:30 vm05 ceph-mon[61345]: 16.6 deep-scrub ok 2026-03-09T20:22:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:30 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:30 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-11"}]: dispatch 2026-03-09T20:22:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:30 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2591376703' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafe_vm05-94281-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:30 vm05 ceph-mon[61345]: 16.5 deep-scrub starts 2026-03-09T20:22:30.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:30 vm05 ceph-mon[61345]: 16.5 deep-scrub ok 2026-03-09T20:22:30.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:30 vm05 ceph-mon[61345]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:22:30.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:30 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-11"}]': finished 2026-03-09T20:22:30.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:30 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2591376703' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafe_vm05-94281-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:30.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:30 vm05 ceph-mon[61345]: osdmap e101: 8 total, 8 up, 8 in 2026-03-09T20:22:30.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:30 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/4002135007' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm05-94655-10"}]: dispatch 2026-03-09T20:22:30.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:30 vm05 ceph-mon[61345]: from='client.31108 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm05-94655-10"}]: dispatch 2026-03-09T20:22:30.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:30 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:31.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:30 vm09 ceph-mon[54524]: 16.1 deep-scrub starts 2026-03-09T20:22:31.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:30 vm09 ceph-mon[54524]: 16.1 deep-scrub ok 2026-03-09T20:22:31.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:30 vm09 ceph-mon[54524]: 16.2 deep-scrub starts 2026-03-09T20:22:31.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:30 vm09 ceph-mon[54524]: 16.4 deep-scrub starts 2026-03-09T20:22:31.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:30 vm09 ceph-mon[54524]: 16.2 deep-scrub ok 2026-03-09T20:22:31.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:30 vm09 ceph-mon[54524]: 16.4 deep-scrub ok 2026-03-09T20:22:31.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:30 vm09 ceph-mon[54524]: 16.6 deep-scrub starts 2026-03-09T20:22:31.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:30 vm09 ceph-mon[54524]: 16.6 deep-scrub ok 2026-03-09T20:22:31.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:30 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:31.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:30 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-11"}]: dispatch 2026-03-09T20:22:31.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:30 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2591376703' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafe_vm05-94281-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:31.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:30 vm09 ceph-mon[54524]: 16.5 deep-scrub starts 2026-03-09T20:22:31.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:30 vm09 ceph-mon[54524]: 16.5 deep-scrub ok 2026-03-09T20:22:31.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:30 vm09 ceph-mon[54524]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:22:31.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:30 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-11"}]': finished 2026-03-09T20:22:31.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:30 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/2591376703' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafe_vm05-94281-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:31.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:30 vm09 ceph-mon[54524]: osdmap e101: 8 total, 8 up, 8 in 2026-03-09T20:22:31.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:30 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/4002135007' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm05-94655-10"}]: dispatch 2026-03-09T20:22:31.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:30 vm09 ceph-mon[54524]: from='client.31108 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm05-94655-10"}]: dispatch 2026-03-09T20:22:31.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:30 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:32.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:31 vm09 ceph-mon[54524]: 16.7 deep-scrub starts 2026-03-09T20:22:32.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:31 vm09 ceph-mon[54524]: 16.7 deep-scrub ok 2026-03-09T20:22:32.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:31 vm09 ceph-mon[54524]: pgmap v97: 588 pgs: 160 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 422 active+clean; 144 MiB data, 963 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:22:32.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:31 vm09 ceph-mon[54524]: 16.8 deep-scrub starts 2026-03-09T20:22:32.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:31 vm09 ceph-mon[54524]: 16.8 deep-scrub ok 2026-03-09T20:22:32.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:31 vm09 ceph-mon[54524]: Health check update: 8 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:22:32.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:31 vm09 ceph-mon[54524]: from='client.31108 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm05-94655-10"}]': finished 2026-03-09T20:22:32.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:31 vm09 ceph-mon[54524]: osdmap e102: 8 total, 8 up, 8 in 2026-03-09T20:22:32.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:31 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/4002135007' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm05-94655-10"}]: dispatch 2026-03-09T20:22:32.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:31 vm09 ceph-mon[54524]: from='client.31108 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm05-94655-10"}]: dispatch 2026-03-09T20:22:32.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:31 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:32.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:31 vm05 ceph-mon[51870]: 16.7 deep-scrub starts 2026-03-09T20:22:32.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:31 vm05 ceph-mon[51870]: 16.7 deep-scrub ok 2026-03-09T20:22:32.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:31 vm05 ceph-mon[51870]: pgmap v97: 588 pgs: 160 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 422 active+clean; 144 MiB data, 963 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:22:32.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:31 vm05 ceph-mon[51870]: 16.8 deep-scrub starts 2026-03-09T20:22:32.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:31 vm05 ceph-mon[51870]: 16.8 deep-scrub ok 2026-03-09T20:22:32.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:31 vm05 ceph-mon[51870]: Health check update: 8 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:22:32.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:31 vm05 ceph-mon[51870]: from='client.31108 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm05-94655-10"}]': finished 2026-03-09T20:22:32.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:31 vm05 ceph-mon[51870]: osdmap e102: 8 total, 8 up, 8 in 2026-03-09T20:22:32.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:31 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/4002135007' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm05-94655-10"}]: dispatch 2026-03-09T20:22:32.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:31 vm05 ceph-mon[51870]: from='client.31108 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm05-94655-10"}]: dispatch 2026-03-09T20:22:32.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:31 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:32.161 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:31 vm05 ceph-mon[61345]: 16.7 deep-scrub starts 2026-03-09T20:22:32.161 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:31 vm05 ceph-mon[61345]: 16.7 deep-scrub ok 2026-03-09T20:22:32.161 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:31 vm05 ceph-mon[61345]: pgmap v97: 588 pgs: 160 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 422 active+clean; 144 MiB data, 963 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:22:32.161 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:31 vm05 ceph-mon[61345]: 16.8 deep-scrub starts 2026-03-09T20:22:32.161 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:31 vm05 ceph-mon[61345]: 16.8 deep-scrub ok 2026-03-09T20:22:32.161 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:31 vm05 ceph-mon[61345]: Health check update: 8 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:22:32.161 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:31 vm05 ceph-mon[61345]: from='client.31108 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm05-94655-10"}]': finished 2026-03-09T20:22:32.161 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:31 vm05 ceph-mon[61345]: osdmap e102: 8 total, 8 up, 8 in 2026-03-09T20:22:32.161 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:31 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/4002135007' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm05-94655-10"}]: dispatch 2026-03-09T20:22:32.161 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:31 vm05 ceph-mon[61345]: from='client.31108 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm05-94655-10"}]: dispatch 2026-03-09T20:22:32.161 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:31 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:33.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:33 vm05 ceph-mon[51870]: pgmap v100: 420 pgs: 8 active+clean+snaptrim_wait, 8 creating+activating, 23 creating+peering, 6 active+clean+snaptrim, 375 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 18 MiB/s wr, 2 op/s 2026-03-09T20:22:33.662 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:33 vm05 ceph-mon[51870]: from='client.31108 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm05-94655-10"}]': finished 2026-03-09T20:22:33.662 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:33 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:33.662 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:33 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/328804401' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm05-94310-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:33.662 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:33 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:33.662 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:33 vm05 ceph-mon[51870]: osdmap e103: 8 total, 8 up, 8 in 2026-03-09T20:22:33.662 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:33 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:33.662 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:33 vm05 ceph-mon[51870]: from='client.38513 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm05-94310-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:33.662 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:33 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/4219644198' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-94655-15"}]: dispatch 2026-03-09T20:22:33.662 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:33 vm05 ceph-mon[51870]: from='client.39530 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-94655-15"}]: dispatch 2026-03-09T20:22:33.662 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:33 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/4219644198' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm05-94655-15"}]: dispatch 2026-03-09T20:22:33.662 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:33 vm05 ceph-mon[51870]: from='client.39530 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm05-94655-15"}]: dispatch 2026-03-09T20:22:33.662 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:33 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/4219644198' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-94655-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:22:33.662 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:33 vm05 ceph-mon[51870]: from='client.39530 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-94655-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:22:33.662 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:33 vm05 ceph-mon[61345]: pgmap v100: 420 pgs: 8 active+clean+snaptrim_wait, 8 creating+activating, 23 creating+peering, 6 active+clean+snaptrim, 375 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 18 MiB/s wr, 2 op/s 2026-03-09T20:22:33.662 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:33 vm05 ceph-mon[61345]: from='client.31108 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm05-94655-10"}]': finished 2026-03-09T20:22:33.662 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:33 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:33.662 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:33 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/328804401' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm05-94310-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:33.662 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:33 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:33.662 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:33 vm05 ceph-mon[61345]: osdmap e103: 8 total, 8 up, 8 in 2026-03-09T20:22:33.662 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:33 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:33.662 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:33 vm05 ceph-mon[61345]: from='client.38513 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm05-94310-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:33.662 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:33 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/4219644198' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-94655-15"}]: dispatch 2026-03-09T20:22:33.662 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:33 vm05 ceph-mon[61345]: from='client.39530 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-94655-15"}]: dispatch 2026-03-09T20:22:33.662 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:33 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/4219644198' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm05-94655-15"}]: dispatch 2026-03-09T20:22:33.662 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:33 vm05 ceph-mon[61345]: from='client.39530 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm05-94655-15"}]: dispatch 2026-03-09T20:22:33.662 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:33 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/4219644198' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-94655-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:22:33.662 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:33 vm05 ceph-mon[61345]: from='client.39530 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-94655-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:22:33.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:33 vm09 ceph-mon[54524]: pgmap v100: 420 pgs: 8 active+clean+snaptrim_wait, 8 creating+activating, 23 creating+peering, 6 active+clean+snaptrim, 375 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 18 MiB/s wr, 2 op/s 2026-03-09T20:22:33.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:33 vm09 ceph-mon[54524]: from='client.31108 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm05-94655-10"}]': finished 2026-03-09T20:22:33.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:33 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:33.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:33 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/328804401' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm05-94310-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:33.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:33 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:33.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:33 vm09 ceph-mon[54524]: osdmap e103: 8 total, 8 up, 8 in 2026-03-09T20:22:33.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:33 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:33.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:33 vm09 ceph-mon[54524]: from='client.38513 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm05-94310-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:33.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:33 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/4219644198' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-94655-15"}]: dispatch 2026-03-09T20:22:33.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:33 vm09 ceph-mon[54524]: from='client.39530 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-94655-15"}]: dispatch 2026-03-09T20:22:33.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:33 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/4219644198' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm05-94655-15"}]: dispatch 2026-03-09T20:22:33.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:33 vm09 ceph-mon[54524]: from='client.39530 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm05-94655-15"}]: dispatch 2026-03-09T20:22:33.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:33 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/4219644198' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-94655-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:22:33.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:33 vm09 ceph-mon[54524]: from='client.39530 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-94655-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:22:34.502 INFO:tasks.workunit.client.0.vm05.stdout: io: Running main() from gmock_main.cc 2026-03-09T20:22:34.502 INFO:tasks.workunit.client.0.vm05.stdout: io: [==========] Running 14 tests from 1 test suite. 2026-03-09T20:22:34.502 INFO:tasks.workunit.client.0.vm05.stdout: io: [----------] Global test environment set-up. 2026-03-09T20:22:34.502 INFO:tasks.workunit.client.0.vm05.stdout: io: [----------] 14 tests from NeoRadosIo 2026-03-09T20:22:34.502 INFO:tasks.workunit.client.0.vm05.stdout: io: [ RUN ] NeoRadosIo.Limits 2026-03-09T20:22:34.502 INFO:tasks.workunit.client.0.vm05.stdout: io: [ OK ] NeoRadosIo.Limits (2928 ms) 2026-03-09T20:22:34.502 INFO:tasks.workunit.client.0.vm05.stdout: io: [ RUN ] NeoRadosIo.SimpleWrite 2026-03-09T20:22:34.502 INFO:tasks.workunit.client.0.vm05.stdout: io: [ OK ] NeoRadosIo.SimpleWrite (3920 ms) 2026-03-09T20:22:34.502 INFO:tasks.workunit.client.0.vm05.stdout: io: [ RUN ] NeoRadosIo.ReadOp 2026-03-09T20:22:34.502 INFO:tasks.workunit.client.0.vm05.stdout: io: [ OK ] NeoRadosIo.ReadOp (3021 ms) 2026-03-09T20:22:34.502 INFO:tasks.workunit.client.0.vm05.stdout: io: [ RUN ] NeoRadosIo.SparseRead 2026-03-09T20:22:34.502 INFO:tasks.workunit.client.0.vm05.stdout: io: [ OK ] NeoRadosIo.SparseRead (2773 ms) 2026-03-09T20:22:34.502 INFO:tasks.workunit.client.0.vm05.stdout: io: [ RUN ] NeoRadosIo.RoundTrip 2026-03-09T20:22:34.502 INFO:tasks.workunit.client.0.vm05.stdout: io: [ OK ] NeoRadosIo.RoundTrip (2813 ms) 2026-03-09T20:22:34.502 INFO:tasks.workunit.client.0.vm05.stdout: io: [ RUN ] NeoRadosIo.ReadIntoBuufferlist 2026-03-09T20:22:34.502 INFO:tasks.workunit.client.0.vm05.stdout: io: [ OK ] NeoRadosIo.ReadIntoBuufferlist (3478 ms) 2026-03-09T20:22:34.502 INFO:tasks.workunit.client.0.vm05.stdout: io: [ RUN ] NeoRadosIo.OverlappingWriteRoundTrip 2026-03-09T20:22:34.502 INFO:tasks.workunit.client.0.vm05.stdout: io: [ OK ] NeoRadosIo.OverlappingWriteRoundTrip (3956 ms) 2026-03-09T20:22:34.502 INFO:tasks.workunit.client.0.vm05.stdout: io: [ RUN ] NeoRadosIo.WriteFullRoundTrip 2026-03-09T20:22:34.502 INFO:tasks.workunit.client.0.vm05.stdout: io: [ OK ] NeoRadosIo.WriteFullRoundTrip (3296 ms) 2026-03-09T20:22:34.502 INFO:tasks.workunit.client.0.vm05.stdout: io: [ RUN ] NeoRadosIo.AppendRoundTrip 2026-03-09T20:22:34.502 INFO:tasks.workunit.client.0.vm05.stdout: io: [ OK ] NeoRadosIo.AppendRoundTrip (2976 ms) 2026-03-09T20:22:34.502 INFO:tasks.workunit.client.0.vm05.stdout: io: [ RUN ] 
NeoRadosIo.Trunc 2026-03-09T20:22:34.502 INFO:tasks.workunit.client.0.vm05.stdout: io: [ OK ] NeoRadosIo.Trunc (2424 ms) 2026-03-09T20:22:34.502 INFO:tasks.workunit.client.0.vm05.stdout: io: [ RUN ] NeoRadosIo.Remove 2026-03-09T20:22:34.503 INFO:tasks.workunit.client.0.vm05.stdout: io: [ OK ] NeoRadosIo.Remove (3068 ms) 2026-03-09T20:22:34.503 INFO:tasks.workunit.client.0.vm05.stdout: io: [ RUN ] NeoRadosIo.XattrsRoundTrip 2026-03-09T20:22:34.503 INFO:tasks.workunit.client.0.vm05.stdout: io: [ OK ] NeoRadosIo.XattrsRoundTrip (3110 ms) 2026-03-09T20:22:34.503 INFO:tasks.workunit.client.0.vm05.stdout: io: [ RUN ] NeoRadosIo.RmXattr 2026-03-09T20:22:34.503 INFO:tasks.workunit.client.0.vm05.stdout: io: [ OK ] NeoRadosIo.RmXattr (2694 ms) 2026-03-09T20:22:34.503 INFO:tasks.workunit.client.0.vm05.stdout: io: [ RUN ] NeoRadosIo.GetXattrs 2026-03-09T20:22:34.503 INFO:tasks.workunit.client.0.vm05.stdout: io: [ OK ] NeoRadosIo.GetXattrs (3484 ms) 2026-03-09T20:22:34.503 INFO:tasks.workunit.client.0.vm05.stdout: io: [----------] 14 tests from NeoRadosIo (43941 ms total) 2026-03-09T20:22:34.503 INFO:tasks.workunit.client.0.vm05.stdout: io: 2026-03-09T20:22:34.503 INFO:tasks.workunit.client.0.vm05.stdout: io: [----------] Global test environment tear-down 2026-03-09T20:22:34.503 INFO:tasks.workunit.client.0.vm05.stdout: io: [==========] 14 tests from 1 test suite ran. (43941 ms total) 2026-03-09T20:22:34.503 INFO:tasks.workunit.client.0.vm05.stdout: io: [ PASSED ] 14 tests. 2026-03-09T20:22:34.509 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: Running main() from gmock_main.cc 2026-03-09T20:22:34.509 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [==========] Running 14 tests from 1 test suite. 2026-03-09T20:22:34.509 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [----------] Global test environment set-up. 
2026-03-09T20:22:34.509 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [----------] 14 tests from NeoRadosReadOps
2026-03-09T20:22:34.509 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ RUN ] NeoRadosReadOps.SetOpFlags
2026-03-09T20:22:34.509 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ OK ] NeoRadosReadOps.SetOpFlags (2845 ms)
2026-03-09T20:22:34.509 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ RUN ] NeoRadosReadOps.AssertExists
2026-03-09T20:22:34.509 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ OK ] NeoRadosReadOps.AssertExists (3822 ms)
2026-03-09T20:22:34.509 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ RUN ] NeoRadosReadOps.AssertVersion
2026-03-09T20:22:34.509 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ OK ] NeoRadosReadOps.AssertVersion (3020 ms)
2026-03-09T20:22:34.509 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ RUN ] NeoRadosReadOps.CmpXattr
2026-03-09T20:22:34.509 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ OK ] NeoRadosReadOps.CmpXattr (2748 ms)
2026-03-09T20:22:34.509 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ RUN ] NeoRadosReadOps.Read
2026-03-09T20:22:34.509 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ OK ] NeoRadosReadOps.Read (2842 ms)
2026-03-09T20:22:34.509 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ RUN ] NeoRadosReadOps.Checksum
2026-03-09T20:22:34.509 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ OK ] NeoRadosReadOps.Checksum (3473 ms)
2026-03-09T20:22:34.509 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ RUN ] NeoRadosReadOps.RWOrderedRead
2026-03-09T20:22:34.509 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ OK ] NeoRadosReadOps.RWOrderedRead (2976 ms)
2026-03-09T20:22:34.509 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ RUN ] NeoRadosReadOps.ShortRead
2026-03-09T20:22:34.509 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ OK ] NeoRadosReadOps.ShortRead (2988 ms)
2026-03-09T20:22:34.509 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ RUN ] NeoRadosReadOps.Exec
2026-03-09T20:22:34.509 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ OK ] NeoRadosReadOps.Exec (3235 ms)
2026-03-09T20:22:34.509 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ RUN ] NeoRadosReadOps.Stat
2026-03-09T20:22:34.509 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ OK ] NeoRadosReadOps.Stat (2334 ms)
2026-03-09T20:22:34.509 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ RUN ] NeoRadosReadOps.Omap
2026-03-09T20:22:34.509 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ OK ] NeoRadosReadOps.Omap (3111 ms)
2026-03-09T20:22:34.509 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ RUN ] NeoRadosReadOps.OmapNuls
2026-03-09T20:22:34.509 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ OK ] NeoRadosReadOps.OmapNuls (3183 ms)
2026-03-09T20:22:34.509 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ RUN ] NeoRadosReadOps.GetXattrs
2026-03-09T20:22:34.509 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ OK ] NeoRadosReadOps.GetXattrs (3055 ms)
2026-03-09T20:22:34.509 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ RUN ] NeoRadosReadOps.CmpExt
2026-03-09T20:22:34.509 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ OK ] NeoRadosReadOps.CmpExt (4132 ms)
2026-03-09T20:22:34.509 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [----------] 14 tests from NeoRadosReadOps (43764 ms total)
2026-03-09T20:22:34.509 INFO:tasks.workunit.client.0.vm05.stdout: read_operations:
2026-03-09T20:22:34.510 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [----------] Global test environment tear-down
2026-03-09T20:22:34.510 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [==========] 14 tests from 1 test suite ran. (43764 ms total)
2026-03-09T20:22:34.510 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ PASSED ] 14 tests.
2026-03-09T20:22:34.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:34 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-13","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-09T20:22:34.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:34 vm09 ceph-mon[54524]: from='client.38513 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm05-94310-9","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-09T20:22:34.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:34 vm09 ceph-mon[54524]: from='client.39530 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-94655-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-09T20:22:34.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:34 vm09 ceph-mon[54524]: osdmap e104: 8 total, 8 up, 8 in
2026-03-09T20:22:34.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:34 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/4219644198' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm05-94655-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm05-94655-15"}]: dispatch
2026-03-09T20:22:34.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:34 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-09T20:22:34.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:34 vm09 ceph-mon[54524]: from='client.39530 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm05-94655-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm05-94655-15"}]: dispatch
2026-03-09T20:22:34.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:34 vm09 ceph-mon[54524]: from='client.?
v1:192.168.123.105:0/1773252715' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValue_vm05-94281-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:34.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:34 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:34.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:34 vm05 ceph-mon[51870]: from='client.38513 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm05-94310-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:34.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:34 vm05 ceph-mon[51870]: from='client.39530 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-94655-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:22:34.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:34 vm05 ceph-mon[51870]: osdmap e104: 8 total, 8 up, 8 in 2026-03-09T20:22:34.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:34 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/4219644198' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm05-94655-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm05-94655-15"}]: dispatch 2026-03-09T20:22:34.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:34 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:34.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:34 vm05 ceph-mon[51870]: from='client.39530 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm05-94655-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm05-94655-15"}]: dispatch 2026-03-09T20:22:34.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:34 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/1773252715' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValue_vm05-94281-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:34.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:34 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:34.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:34 vm05 ceph-mon[61345]: from='client.38513 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm05-94310-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:34.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:34 vm05 ceph-mon[61345]: from='client.39530 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-94655-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:22:34.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:34 vm05 ceph-mon[61345]: osdmap e104: 8 total, 8 up, 8 in 2026-03-09T20:22:34.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:34 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/4219644198' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm05-94655-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm05-94655-15"}]: dispatch 2026-03-09T20:22:34.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:34 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:34.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:34 vm05 ceph-mon[61345]: from='client.39530 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm05-94655-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm05-94655-15"}]: dispatch 2026-03-09T20:22:34.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:34 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1773252715' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValue_vm05-94281-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:35.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:35 vm09 ceph-mon[54524]: pgmap v103: 580 pgs: 160 unknown, 8 active+clean+snaptrim_wait, 8 creating+activating, 23 creating+peering, 6 active+clean+snaptrim, 375 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 18 MiB/s wr, 3 op/s 2026-03-09T20:22:35.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:35 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1773252715' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValue_vm05-94281-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:35.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:35 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:35.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:35 vm09 ceph-mon[54524]: osdmap e105: 8 total, 8 up, 8 in 2026-03-09T20:22:35.773 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:22:35 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:22:35.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:35 vm05 ceph-mon[51870]: pgmap v103: 580 pgs: 160 unknown, 8 active+clean+snaptrim_wait, 8 creating+activating, 23 creating+peering, 6 active+clean+snaptrim, 375 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 18 MiB/s wr, 3 op/s 2026-03-09T20:22:35.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:35 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1773252715' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValue_vm05-94281-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:35.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:35 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:35.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:35 vm05 ceph-mon[51870]: osdmap e105: 8 total, 8 up, 8 in 2026-03-09T20:22:35.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:35 vm05 ceph-mon[61345]: pgmap v103: 580 pgs: 160 unknown, 8 active+clean+snaptrim_wait, 8 creating+activating, 23 creating+peering, 6 active+clean+snaptrim, 375 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 18 MiB/s wr, 3 op/s 2026-03-09T20:22:35.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:35 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1773252715' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValue_vm05-94281-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:35.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:35 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:35.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:35 vm05 ceph-mon[61345]: osdmap e105: 8 total, 8 up, 8 in 2026-03-09T20:22:36.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:36 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:22:36.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:36 vm05 ceph-mon[51870]: from='client.39530 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm05-94655-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm05-94655-15"}]': finished 2026-03-09T20:22:36.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:36 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:36.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:36 vm05 ceph-mon[51870]: osdmap e106: 8 total, 8 up, 8 in 2026-03-09T20:22:36.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:36 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/960306546' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm05-94310-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:36.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:36 vm05 ceph-mon[51870]: from='client.41107 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm05-94310-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:36.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:36 vm05 ceph-mon[51870]: Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:22:36.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:36 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:36.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:36 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:22:36.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:36 vm05 ceph-mon[61345]: from='client.39530 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm05-94655-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm05-94655-15"}]': finished 2026-03-09T20:22:36.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:36 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:36.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:36 vm05 ceph-mon[61345]: osdmap e106: 8 total, 8 up, 8 in 2026-03-09T20:22:36.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:36 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/960306546' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm05-94310-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:36.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:36 vm05 ceph-mon[61345]: from='client.41107 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm05-94310-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:36.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:36 vm05 ceph-mon[61345]: Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:22:36.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:36 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:37.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:36 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:22:37.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:36 vm09 ceph-mon[54524]: from='client.39530 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm05-94655-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm05-94655-15"}]': finished 2026-03-09T20:22:37.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:36 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:37.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:36 vm09 ceph-mon[54524]: osdmap e106: 8 total, 8 up, 8 in 2026-03-09T20:22:37.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:36 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/960306546' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm05-94310-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:37.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:36 vm09 ceph-mon[54524]: from='client.41107 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm05-94310-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:37.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:36 vm09 ceph-mon[54524]: Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:22:37.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:36 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:38.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:37 vm09 ceph-mon[54524]: pgmap v106: 492 pgs: 11 creating+peering, 29 unknown, 452 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 53 KiB/s wr, 55 op/s 2026-03-09T20:22:38.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:37 vm09 ceph-mon[54524]: from='client.41107 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm05-94310-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:38.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:37 vm09 ceph-mon[54524]: osdmap e107: 8 total, 8 up, 8 in 2026-03-09T20:22:38.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:37 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2357427648' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm05-94281-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:38.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:37 vm09 ceph-mon[54524]: from='client.41144 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm05-94281-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:38.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:37 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:38.162 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:37 vm05 ceph-mon[61345]: pgmap v106: 492 pgs: 11 creating+peering, 29 unknown, 452 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 53 KiB/s wr, 55 op/s 2026-03-09T20:22:38.162 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:37 vm05 ceph-mon[61345]: from='client.41107 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm05-94310-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:38.162 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:37 vm05 ceph-mon[61345]: osdmap e107: 8 total, 8 up, 8 in 2026-03-09T20:22:38.162 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:37 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2357427648' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm05-94281-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:38.162 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:37 vm05 ceph-mon[61345]: from='client.41144 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm05-94281-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:38.162 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:37 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:38.162 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:37 vm05 ceph-mon[51870]: pgmap v106: 492 pgs: 11 creating+peering, 29 unknown, 452 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 53 KiB/s wr, 55 op/s 2026-03-09T20:22:38.162 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:37 vm05 ceph-mon[51870]: from='client.41107 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm05-94310-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:38.162 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:37 vm05 ceph-mon[51870]: osdmap e107: 8 total, 8 up, 8 in 2026-03-09T20:22:38.162 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:37 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2357427648' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm05-94281-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:38.162 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:37 vm05 ceph-mon[51870]: from='client.41144 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm05-94281-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:38.162 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:37 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:38.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:38 vm05 ceph-mon[51870]: from='client.41144 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Flush_vm05-94281-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:38.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:38 vm05 ceph-mon[51870]: osdmap e108: 8 total, 8 up, 8 in 2026-03-09T20:22:38.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:38 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-13", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:22:38.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:38 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-13", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:22:38.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:38 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:38.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:22:38 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:22:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:22:38.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:38 vm05 ceph-mon[61345]: from='client.41144 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Flush_vm05-94281-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:38.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:38 vm05 ceph-mon[61345]: osdmap e108: 8 total, 8 up, 8 in 2026-03-09T20:22:38.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:38 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-13", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:22:38.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:38 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-13", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:22:38.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:38 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:39.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:38 vm09 ceph-mon[54524]: from='client.41144 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Flush_vm05-94281-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:39.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:38 vm09 ceph-mon[54524]: osdmap e108: 8 total, 8 up, 8 in 2026-03-09T20:22:39.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:38 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-13", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:22:39.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:38 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-13", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:22:39.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:38 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:40.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:39 vm09 ceph-mon[54524]: pgmap v109: 492 pgs: 4 creating+peering, 68 unknown, 420 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 50 KiB/s wr, 51 op/s 2026-03-09T20:22:40.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:39 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-13", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:22:40.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:39 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-13"}]: dispatch 2026-03-09T20:22:40.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:39 vm09 ceph-mon[54524]: osdmap e109: 8 total, 8 up, 8 in 2026-03-09T20:22:40.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:39 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3283819934' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm05-94310-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:40.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:39 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-13"}]: dispatch 2026-03-09T20:22:40.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:39 vm09 ceph-mon[54524]: from='client.43141 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm05-94310-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:40.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:39 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:40.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:39 vm05 ceph-mon[51870]: pgmap v109: 492 pgs: 4 creating+peering, 68 unknown, 420 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 50 KiB/s wr, 51 op/s 2026-03-09T20:22:40.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:39 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-13", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:22:40.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:39 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-13"}]: dispatch 2026-03-09T20:22:40.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:39 vm05 ceph-mon[51870]: osdmap e109: 8 total, 8 up, 8 in 2026-03-09T20:22:40.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:39 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3283819934' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm05-94310-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:40.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:39 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-13"}]: dispatch 2026-03-09T20:22:40.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:39 vm05 ceph-mon[51870]: from='client.43141 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm05-94310-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:40.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:39 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:40.161 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:39 vm05 ceph-mon[61345]: pgmap v109: 492 pgs: 4 creating+peering, 68 unknown, 420 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 50 KiB/s wr, 51 op/s 2026-03-09T20:22:40.161 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:39 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-13", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:22:40.161 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:39 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-13"}]: dispatch 2026-03-09T20:22:40.161 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:39 vm05 ceph-mon[61345]: osdmap e109: 8 total, 8 up, 8 in 2026-03-09T20:22:40.161 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:39 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3283819934' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm05-94310-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:40.161 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:39 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-13"}]: dispatch 2026-03-09T20:22:40.161 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:39 vm05 ceph-mon[61345]: from='client.43141 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm05-94310-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:40.161 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:39 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:41.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:40 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-13"}]': finished 2026-03-09T20:22:41.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:40 vm09 ceph-mon[54524]: from='client.43141 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm05-94310-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:41.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:40 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-13", "mode": "writeback"}]: dispatch 2026-03-09T20:22:41.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:40 vm09 ceph-mon[54524]: osdmap e110: 8 total, 8 up, 8 in 2026-03-09T20:22:41.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:40 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3466245455' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm05-94281-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:41.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:40 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-13", "mode": "writeback"}]: dispatch 2026-03-09T20:22:41.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:40 vm09 ceph-mon[54524]: from='client.43043 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm05-94281-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:41.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:40 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:41.163 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:40 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-13"}]': finished 2026-03-09T20:22:41.163 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:40 vm05 ceph-mon[51870]: from='client.43141 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm05-94310-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:41.164 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:40 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-13", "mode": "writeback"}]: dispatch 2026-03-09T20:22:41.164 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:40 vm05 ceph-mon[51870]: osdmap e110: 8 total, 8 up, 8 in 2026-03-09T20:22:41.164 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:40 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3466245455' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm05-94281-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:41.164 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:40 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-13", "mode": "writeback"}]: dispatch 2026-03-09T20:22:41.164 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:40 vm05 ceph-mon[51870]: from='client.43043 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm05-94281-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:41.164 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:40 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:41.164 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:40 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-13"}]': finished 2026-03-09T20:22:41.164 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:40 vm05 ceph-mon[61345]: from='client.43141 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm05-94310-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:41.164 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:40 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-13", "mode": "writeback"}]: dispatch 2026-03-09T20:22:41.164 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:40 vm05 ceph-mon[61345]: osdmap e110: 8 total, 8 up, 8 in 2026-03-09T20:22:41.164 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:40 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3466245455' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm05-94281-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:41.164 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:40 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-13", "mode": "writeback"}]: dispatch 2026-03-09T20:22:41.164 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:40 vm05 ceph-mon[61345]: from='client.43043 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm05-94281-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:41.164 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:40 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:42.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:41 vm05 ceph-mon[51870]: pgmap v112: 524 pgs: 4 creating+peering, 100 unknown, 420 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:22:42.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:41 vm05 ceph-mon[51870]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:22:42.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:41 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-13", "mode": "writeback"}]': finished 2026-03-09T20:22:42.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:41 vm05 ceph-mon[51870]: from='client.43043 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsync_vm05-94281-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:42.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:41 vm05 ceph-mon[51870]: osdmap e111: 8 total, 8 up, 8 in 2026-03-09T20:22:42.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:41 vm05 ceph-mon[51870]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:22:42.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:41 vm05 ceph-mon[51870]: osdmap e112: 8 total, 8 up, 8 in 2026-03-09T20:22:42.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:41 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:42.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:41 vm05 ceph-mon[61345]: pgmap v112: 524 pgs: 4 creating+peering, 100 unknown, 420 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:22:42.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:41 vm05 ceph-mon[61345]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:22:42.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:41 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-13", "mode": "writeback"}]': finished 2026-03-09T20:22:42.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:41 vm05 ceph-mon[61345]: from='client.43043 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsync_vm05-94281-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:42.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:41 vm05 ceph-mon[61345]: osdmap e111: 8 total, 8 up, 8 in 2026-03-09T20:22:42.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:41 vm05 ceph-mon[61345]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:22:42.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:41 vm05 ceph-mon[61345]: osdmap e112: 8 total, 8 up, 8 in 2026-03-09T20:22:42.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:41 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:42.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:41 vm09 ceph-mon[54524]: pgmap v112: 524 pgs: 4 creating+peering, 100 unknown, 420 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:22:42.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:41 vm09 ceph-mon[54524]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:22:42.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:41 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-13", "mode": "writeback"}]': finished 2026-03-09T20:22:42.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:41 vm09 ceph-mon[54524]: from='client.43043 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsync_vm05-94281-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:42.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:41 vm09 ceph-mon[54524]: osdmap e111: 8 total, 8 up, 8 in 2026-03-09T20:22:42.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:41 vm09 ceph-mon[54524]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:22:42.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:41 vm09 ceph-mon[54524]: osdmap e112: 8 total, 8 up, 8 in 2026-03-09T20:22:42.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:41 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:43.165 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "160.0"}]: dispatch 2026-03-09T20:22:43.165 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[51870]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "160.0"}]: dispatch 2026-03-09T20:22:43.165 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "160.1"}]: dispatch 2026-03-09T20:22:43.165 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[51870]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "160.1"}]: dispatch 2026-03-09T20:22:43.165 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "160.2"}]: dispatch 2026-03-09T20:22:43.165 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[51870]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "160.2"}]: dispatch 2026-03-09T20:22:43.165 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "160.3"}]: dispatch 2026-03-09T20:22:43.165 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[51870]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' 
cmd=[{"prefix": "pg scrub", "pgid": "160.3"}]: dispatch 2026-03-09T20:22:43.165 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "160.4"}]: dispatch 2026-03-09T20:22:43.165 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[51870]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "160.4"}]: dispatch 2026-03-09T20:22:43.165 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "160.5"}]: dispatch 2026-03-09T20:22:43.165 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[51870]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "160.5"}]: dispatch 2026-03-09T20:22:43.165 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "160.6"}]: dispatch 2026-03-09T20:22:43.165 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[51870]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "160.6"}]: dispatch 2026-03-09T20:22:43.165 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "160.7"}]: dispatch 2026-03-09T20:22:43.165 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[51870]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "160.7"}]: dispatch 2026-03-09T20:22:43.165 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "160.8"}]: dispatch 2026-03-09T20:22:43.165 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[51870]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "160.8"}]: dispatch 2026-03-09T20:22:43.165 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "160.9"}]: dispatch 2026-03-09T20:22:43.165 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[51870]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' 
cmd=[{"prefix": "pg scrub", "pgid": "160.9"}]: dispatch 2026-03-09T20:22:43.165 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[51870]: 160.6 scrub starts 2026-03-09T20:22:43.166 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[51870]: 160.6 scrub ok 2026-03-09T20:22:43.166 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[51870]: 160.0 scrub starts 2026-03-09T20:22:43.166 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[51870]: 160.0 scrub ok 2026-03-09T20:22:43.166 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[51870]: 160.1 scrub starts 2026-03-09T20:22:43.166 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[51870]: 160.1 scrub ok 2026-03-09T20:22:43.166 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[51870]: 160.5 scrub starts 2026-03-09T20:22:43.166 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[51870]: 160.5 scrub ok 2026-03-09T20:22:43.166 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[51870]: osdmap e113: 8 total, 8 up, 8 in 2026-03-09T20:22:43.166 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1036908684' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafePP_vm05-94310-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:43.166 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1478688144' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm05-94281-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:43.166 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[51870]: from='client.44630 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafePP_vm05-94310-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:43.166 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:43.166 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "160.0"}]: dispatch 2026-03-09T20:22:43.166 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[61345]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "160.0"}]: dispatch 2026-03-09T20:22:43.166 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "160.1"}]: dispatch 2026-03-09T20:22:43.166 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[61345]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "160.1"}]: dispatch 2026-03-09T20:22:43.166 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "160.2"}]: dispatch 2026-03-09T20:22:43.166 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[61345]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' 
cmd=[{"prefix": "pg scrub", "pgid": "160.2"}]: dispatch 2026-03-09T20:22:43.166 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "160.3"}]: dispatch 2026-03-09T20:22:43.166 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[61345]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "160.3"}]: dispatch 2026-03-09T20:22:43.166 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "160.4"}]: dispatch 2026-03-09T20:22:43.166 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[61345]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "160.4"}]: dispatch 2026-03-09T20:22:43.166 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "160.5"}]: dispatch 2026-03-09T20:22:43.166 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[61345]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "160.5"}]: dispatch 2026-03-09T20:22:43.166 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "160.6"}]: dispatch 2026-03-09T20:22:43.166 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[61345]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "160.6"}]: dispatch 2026-03-09T20:22:43.166 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "160.7"}]: dispatch 2026-03-09T20:22:43.166 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[61345]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "160.7"}]: dispatch 2026-03-09T20:22:43.166 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "160.8"}]: dispatch 2026-03-09T20:22:43.166 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[61345]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "160.8"}]: dispatch 2026-03-09T20:22:43.166 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "160.9"}]: dispatch 2026-03-09T20:22:43.166 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[61345]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' 
cmd=[{"prefix": "pg scrub", "pgid": "160.9"}]: dispatch 2026-03-09T20:22:43.166 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[61345]: 160.6 scrub starts 2026-03-09T20:22:43.166 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[61345]: 160.6 scrub ok 2026-03-09T20:22:43.166 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[61345]: 160.0 scrub starts 2026-03-09T20:22:43.166 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[61345]: 160.0 scrub ok 2026-03-09T20:22:43.166 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[61345]: 160.1 scrub starts 2026-03-09T20:22:43.166 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[61345]: 160.1 scrub ok 2026-03-09T20:22:43.166 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[61345]: 160.5 scrub starts 2026-03-09T20:22:43.166 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[61345]: 160.5 scrub ok 2026-03-09T20:22:43.166 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[61345]: osdmap e113: 8 total, 8 up, 8 in 2026-03-09T20:22:43.166 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1036908684' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafePP_vm05-94310-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:43.166 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1478688144' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm05-94281-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:43.166 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[61345]: from='client.44630 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafePP_vm05-94310-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:43.166 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:42 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:43.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:42 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "160.0"}]: dispatch 2026-03-09T20:22:43.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:42 vm09 ceph-mon[54524]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "160.0"}]: dispatch 2026-03-09T20:22:43.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:42 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "160.1"}]: dispatch 2026-03-09T20:22:43.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:42 vm09 ceph-mon[54524]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "160.1"}]: dispatch 2026-03-09T20:22:43.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:42 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "160.2"}]: dispatch 2026-03-09T20:22:43.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:42 vm09 ceph-mon[54524]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' 
cmd=[{"prefix": "pg scrub", "pgid": "160.2"}]: dispatch 2026-03-09T20:22:43.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:42 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "160.3"}]: dispatch 2026-03-09T20:22:43.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:42 vm09 ceph-mon[54524]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "160.3"}]: dispatch 2026-03-09T20:22:43.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:42 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "160.4"}]: dispatch 2026-03-09T20:22:43.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:42 vm09 ceph-mon[54524]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "160.4"}]: dispatch 2026-03-09T20:22:43.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:42 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "160.5"}]: dispatch 2026-03-09T20:22:43.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:42 vm09 ceph-mon[54524]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "160.5"}]: dispatch 2026-03-09T20:22:43.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:42 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "160.6"}]: dispatch 2026-03-09T20:22:43.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:42 vm09 ceph-mon[54524]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "160.6"}]: dispatch 2026-03-09T20:22:43.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:42 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "160.7"}]: dispatch 2026-03-09T20:22:43.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:42 vm09 ceph-mon[54524]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "160.7"}]: dispatch 2026-03-09T20:22:43.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:42 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "160.8"}]: dispatch 2026-03-09T20:22:43.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:42 vm09 ceph-mon[54524]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "160.8"}]: dispatch 2026-03-09T20:22:43.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:42 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "160.9"}]: dispatch 2026-03-09T20:22:43.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:42 vm09 ceph-mon[54524]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' 
cmd=[{"prefix": "pg scrub", "pgid": "160.9"}]: dispatch 2026-03-09T20:22:43.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:42 vm09 ceph-mon[54524]: 160.6 scrub starts 2026-03-09T20:22:43.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:42 vm09 ceph-mon[54524]: 160.6 scrub ok 2026-03-09T20:22:43.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:42 vm09 ceph-mon[54524]: 160.0 scrub starts 2026-03-09T20:22:43.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:42 vm09 ceph-mon[54524]: 160.0 scrub ok 2026-03-09T20:22:43.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:42 vm09 ceph-mon[54524]: 160.1 scrub starts 2026-03-09T20:22:43.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:42 vm09 ceph-mon[54524]: 160.1 scrub ok 2026-03-09T20:22:43.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:42 vm09 ceph-mon[54524]: 160.5 scrub starts 2026-03-09T20:22:43.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:42 vm09 ceph-mon[54524]: 160.5 scrub ok 2026-03-09T20:22:43.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:42 vm09 ceph-mon[54524]: osdmap e113: 8 total, 8 up, 8 in 2026-03-09T20:22:43.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:42 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1036908684' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafePP_vm05-94310-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:43.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:42 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1478688144' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm05-94281-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:43.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:42 vm09 ceph-mon[54524]: from='client.44630 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafePP_vm05-94310-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:43.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:42 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:44.163 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:43 vm05 ceph-mon[51870]: 160.8 scrub starts 2026-03-09T20:22:44.163 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:43 vm05 ceph-mon[51870]: 160.8 scrub ok 2026-03-09T20:22:44.164 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:43 vm05 ceph-mon[51870]: pgmap v116: 492 pgs: 64 unknown, 428 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 3.7 KiB/s rd, 91 KiB/s wr, 88 op/s 2026-03-09T20:22:44.164 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:43 vm05 ceph-mon[51870]: 160.4 deep-scrub starts 2026-03-09T20:22:44.164 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:43 vm05 ceph-mon[51870]: 160.4 deep-scrub ok 2026-03-09T20:22:44.164 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:43 vm05 ceph-mon[51870]: 160.7 scrub starts 2026-03-09T20:22:44.164 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:43 vm05 ceph-mon[51870]: 160.7 scrub ok 2026-03-09T20:22:44.164 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:43 vm05 ceph-mon[51870]: 160.2 scrub starts 2026-03-09T20:22:44.164 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:43 vm05 ceph-mon[51870]: 160.2 scrub ok 2026-03-09T20:22:44.164 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:43 vm05 ceph-mon[51870]: 160.9 scrub starts 2026-03-09T20:22:44.164 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:43 vm05 ceph-mon[51870]: 160.9 scrub ok 2026-03-09T20:22:44.164 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:43 vm05 ceph-mon[51870]: 160.3 scrub starts 2026-03-09T20:22:44.164 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:43 vm05 ceph-mon[51870]: 160.3 scrub ok 2026-03-09T20:22:44.164 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:43 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1478688144' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm05-94281-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:44.164 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:43 vm05 ceph-mon[51870]: from='client.44630 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafePP_vm05-94310-12","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:44.164 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:43 vm05 ceph-mon[51870]: osdmap e114: 8 total, 8 up, 8 in 2026-03-09T20:22:44.164 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:43 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:44.164 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:43 vm05 ceph-mon[61345]: 160.8 scrub starts 2026-03-09T20:22:44.164 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:43 vm05 ceph-mon[61345]: 160.8 scrub ok 2026-03-09T20:22:44.164 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:43 vm05 ceph-mon[61345]: pgmap v116: 492 pgs: 64 unknown, 428 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 3.7 KiB/s rd, 91 KiB/s wr, 88 op/s 2026-03-09T20:22:44.164 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:43 vm05 ceph-mon[61345]: 160.4 deep-scrub starts 2026-03-09T20:22:44.164 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:43 vm05 ceph-mon[61345]: 160.4 deep-scrub ok 2026-03-09T20:22:44.164 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:43 vm05 ceph-mon[61345]: 160.7 scrub starts 2026-03-09T20:22:44.164 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:43 vm05 ceph-mon[61345]: 160.7 scrub ok 2026-03-09T20:22:44.164 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:43 vm05 ceph-mon[61345]: 160.2 scrub starts 2026-03-09T20:22:44.164 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:43 vm05 ceph-mon[61345]: 160.2 scrub ok 2026-03-09T20:22:44.164 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:43 vm05 ceph-mon[61345]: 160.9 scrub starts 2026-03-09T20:22:44.164 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:43 vm05 ceph-mon[61345]: 160.9 scrub ok 2026-03-09T20:22:44.164 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:43 vm05 ceph-mon[61345]: 160.3 scrub starts 2026-03-09T20:22:44.164 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:43 vm05 ceph-mon[61345]: 160.3 scrub ok 2026-03-09T20:22:44.164 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:43 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1478688144' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm05-94281-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:44.164 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:43 vm05 ceph-mon[61345]: from='client.44630 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafePP_vm05-94310-12","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:44.164 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:43 vm05 ceph-mon[61345]: osdmap e114: 8 total, 8 up, 8 in 2026-03-09T20:22:44.164 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:43 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:44.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:43 vm09 ceph-mon[54524]: 160.8 scrub starts 2026-03-09T20:22:44.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:43 vm09 ceph-mon[54524]: 160.8 scrub ok 2026-03-09T20:22:44.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:43 vm09 ceph-mon[54524]: pgmap v116: 492 pgs: 64 unknown, 428 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 3.7 KiB/s rd, 91 KiB/s wr, 88 op/s 2026-03-09T20:22:44.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:43 vm09 ceph-mon[54524]: 160.4 deep-scrub starts 2026-03-09T20:22:44.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:43 vm09 ceph-mon[54524]: 160.4 deep-scrub ok 2026-03-09T20:22:44.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:43 vm09 ceph-mon[54524]: 160.7 scrub starts 2026-03-09T20:22:44.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:43 vm09 ceph-mon[54524]: 160.7 scrub ok 2026-03-09T20:22:44.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:43 vm09 ceph-mon[54524]: 160.2 scrub starts 2026-03-09T20:22:44.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:43 vm09 ceph-mon[54524]: 160.2 scrub ok 2026-03-09T20:22:44.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:43 vm09 ceph-mon[54524]: 160.9 scrub starts 2026-03-09T20:22:44.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:43 vm09 ceph-mon[54524]: 160.9 scrub ok 2026-03-09T20:22:44.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:43 vm09 ceph-mon[54524]: 160.3 scrub starts 2026-03-09T20:22:44.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:43 vm09 ceph-mon[54524]: 160.3 scrub ok 2026-03-09T20:22:44.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:43 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1478688144' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm05-94281-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:44.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:43 vm09 ceph-mon[54524]: from='client.44630 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafePP_vm05-94310-12","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:44.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:43 vm09 ceph-mon[54524]: osdmap e114: 8 total, 8 up, 8 in 2026-03-09T20:22:44.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:43 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:45.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:45 vm05 ceph-mon[51870]: osdmap e115: 8 total, 8 up, 8 in 2026-03-09T20:22:45.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:45 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/4219644198' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-94655-15"}]: dispatch 2026-03-09T20:22:45.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:45 vm05 ceph-mon[51870]: from='client.39530 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-94655-15"}]: dispatch 2026-03-09T20:22:45.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:45 vm05 ceph-mon[51870]: pgmap v119: 452 pgs: 32 unknown, 420 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:22:45.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:45 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:22:45.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:45 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:45.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:45 vm05 ceph-mon[61345]: osdmap e115: 8 total, 8 up, 8 in 2026-03-09T20:22:45.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:45 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/4219644198' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-94655-15"}]: dispatch 2026-03-09T20:22:45.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:45 vm05 ceph-mon[61345]: from='client.39530 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-94655-15"}]: dispatch 2026-03-09T20:22:45.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:45 vm05 ceph-mon[61345]: pgmap v119: 452 pgs: 32 unknown, 420 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:22:45.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:45 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:22:45.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:45 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:45.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:45 vm09 ceph-mon[54524]: osdmap e115: 8 total, 8 up, 8 in 2026-03-09T20:22:45.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:45 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/4219644198' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-94655-15"}]: dispatch 2026-03-09T20:22:45.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:45 vm09 ceph-mon[54524]: from='client.39530 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-94655-15"}]: dispatch 2026-03-09T20:22:45.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:45 vm09 ceph-mon[54524]: pgmap v119: 452 pgs: 32 unknown, 420 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:22:45.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:45 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:22:45.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:45 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:45.522 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:22:45 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:22:46.177 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: Running main() from gmock_main.cc 2026-03-09T20:22:46.177 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [==========] Running 13 tests from 4 test suites. 2026-03-09T20:22:46.177 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [----------] Global test environment set-up. 2026-03-09T20:22:46.177 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [----------] 4 tests from LibRadosSnapshots 2026-03-09T20:22:46.177 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ RUN ] LibRadosSnapshots.SnapList 2026-03-09T20:22:46.177 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ OK ] LibRadosSnapshots.SnapList (2098 ms) 2026-03-09T20:22:46.177 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ RUN ] LibRadosSnapshots.SnapRemove 2026-03-09T20:22:46.177 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ OK ] LibRadosSnapshots.SnapRemove (1951 ms) 2026-03-09T20:22:46.177 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ RUN ] LibRadosSnapshots.Rollback 2026-03-09T20:22:46.177 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ OK ] LibRadosSnapshots.Rollback (2848 ms) 2026-03-09T20:22:46.177 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ RUN ] LibRadosSnapshots.SnapGetName 2026-03-09T20:22:46.177 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ OK ] LibRadosSnapshots.SnapGetName (2022 ms) 2026-03-09T20:22:46.177 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [----------] 4 tests from LibRadosSnapshots (8919 ms total) 2026-03-09T20:22:46.177 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: 2026-03-09T20:22:46.177 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [----------] 3 tests from LibRadosSnapshotsSelfManaged 2026-03-09T20:22:46.177 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsSelfManaged.Snap 2026-03-09T20:22:46.177 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ OK ] LibRadosSnapshotsSelfManaged.Snap (4181 ms) 2026-03-09T20:22:46.177 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsSelfManaged.Rollback 2026-03-09T20:22:46.177 
INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ OK ] LibRadosSnapshotsSelfManaged.Rollback (4362 ms) 2026-03-09T20:22:46.177 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsSelfManaged.FutureSnapRollback 2026-03-09T20:22:46.177 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ OK ] LibRadosSnapshotsSelfManaged.FutureSnapRollback (5103 ms) 2026-03-09T20:22:46.177 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [----------] 3 tests from LibRadosSnapshotsSelfManaged (13646 ms total) 2026-03-09T20:22:46.177 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: 2026-03-09T20:22:46.177 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [----------] 4 tests from LibRadosSnapshotsEC 2026-03-09T20:22:46.177 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsEC.SnapList 2026-03-09T20:22:46.177 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ OK ] LibRadosSnapshotsEC.SnapList (2405 ms) 2026-03-09T20:22:46.177 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsEC.SnapRemove 2026-03-09T20:22:46.178 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ OK ] LibRadosSnapshotsEC.SnapRemove (2105 ms) 2026-03-09T20:22:46.178 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsEC.Rollback 2026-03-09T20:22:46.178 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ OK ] LibRadosSnapshotsEC.Rollback (2109 ms) 2026-03-09T20:22:46.178 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsEC.SnapGetName 2026-03-09T20:22:46.178 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ OK ] LibRadosSnapshotsEC.SnapGetName (2032 ms) 2026-03-09T20:22:46.178 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [----------] 4 tests from LibRadosSnapshotsEC (8651 ms total) 2026-03-09T20:22:46.178 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: 2026-03-09T20:22:46.178 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [----------] 2 tests from LibRadosSnapshotsSelfManagedEC 2026-03-09T20:22:46.178 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsSelfManagedEC.Snap 2026-03-09T20:22:46.178 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ OK ] LibRadosSnapshotsSelfManagedEC.Snap (4354 ms) 2026-03-09T20:22:46.178 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsSelfManagedEC.Rollback 2026-03-09T20:22:46.178 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ OK ] LibRadosSnapshotsSelfManagedEC.Rollback (3341 ms) 2026-03-09T20:22:46.178 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [----------] 2 tests from LibRadosSnapshotsSelfManagedEC (7695 ms total) 2026-03-09T20:22:46.178 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: 2026-03-09T20:22:46.178 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [----------] Global test environment tear-down 2026-03-09T20:22:46.178 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [==========] 13 tests from 4 test suites ran. (56336 ms total) 2026-03-09T20:22:46.178 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ PASSED ] 13 tests. 
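Note on the monitor lines interleaved above: each "cmd=[{...}]: dispatch" / "cmd='[...]': finished" pair is a JSON monitor command issued by the rados_api_tests binaries while they run (a "pg scrub" per PG of the test pool, an "osd pool application enable" for each freshly created pool, and a recurring {"prefix":"status","format":"json"} poll). The paired 'client.?' and 'mon.?' entries for the same pgid are consistent with a command arriving at one monitor and being forwarded to the leader, and the same audit lines showing up under mon.a, mon.b and mon.c is each monitor recording the shared audit log. A minimal sketch of sending the same commands with the rados Python bindings follows; the conffile path, client name, and pool name are illustrative placeholders, not values taken from this run.

    import json
    import rados

    # Placeholder conffile/client name -- adjust for the cluster at hand.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.admin')
    cluster.connect()
    try:
        # Same JSON the "pg scrub ... dispatch" lines show, one command per PG.
        for pgid in ('160.%d' % i for i in range(10)):
            ret, outbuf, outs = cluster.mon_command(
                json.dumps({"prefix": "pg scrub", "pgid": pgid}), b'')

        # Shape of the "osd pool application enable ... finished" commands;
        # "example-pool" stands in for per-run names like IsSafePP_vm05-94310-12.
        cluster.mon_command(json.dumps({
            "prefix": "osd pool application enable",
            "pool": "example-pool",
            "app": "rados",
            "yes_i_really_mean_it": True}), b'')

        # The recurring status poll seen as cmd=[{"prefix":"status","format":"json"}].
        ret, outbuf, outs = cluster.mon_command(
            json.dumps({"prefix": "status", "format": "json"}), b'')
        print(json.loads(outbuf)["health"]["status"])
    finally:
        cluster.shutdown()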
2026-03-09T20:22:46.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:46 vm05 ceph-mon[51870]: from='client.39530 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-94655-15"}]': finished 2026-03-09T20:22:46.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:46 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/4219644198' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm05-94655-15"}]: dispatch 2026-03-09T20:22:46.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:46 vm05 ceph-mon[51870]: osdmap e116: 8 total, 8 up, 8 in 2026-03-09T20:22:46.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:46 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3817808212' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm05-94281-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:46.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:46 vm05 ceph-mon[51870]: from='client.39530 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm05-94655-15"}]: dispatch 2026-03-09T20:22:46.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:46 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3277514827' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm05-94310-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:46.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:46 vm05 ceph-mon[51870]: from='client.46571 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm05-94281-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:46.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:46 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:22:46.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:46 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:46.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:46 vm05 ceph-mon[51870]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:22:46.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:46 vm05 ceph-mon[61345]: from='client.39530 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-94655-15"}]': finished 2026-03-09T20:22:46.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:46 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/4219644198' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm05-94655-15"}]: dispatch 2026-03-09T20:22:46.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:46 vm05 ceph-mon[61345]: osdmap e116: 8 total, 8 up, 8 in 2026-03-09T20:22:46.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:46 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3817808212' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm05-94281-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:46.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:46 vm05 ceph-mon[61345]: from='client.39530 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm05-94655-15"}]: dispatch 2026-03-09T20:22:46.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:46 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3277514827' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm05-94310-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:46.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:46 vm05 ceph-mon[61345]: from='client.46571 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm05-94281-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:46.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:46 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:22:46.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:46 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:46.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:46 vm05 ceph-mon[61345]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:22:46.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:46 vm09 ceph-mon[54524]: from='client.39530 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-94655-15"}]': finished 2026-03-09T20:22:46.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:46 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/4219644198' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm05-94655-15"}]: dispatch 2026-03-09T20:22:46.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:46 vm09 ceph-mon[54524]: osdmap e116: 8 total, 8 up, 8 in 2026-03-09T20:22:46.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:46 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3817808212' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm05-94281-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:46.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:46 vm09 ceph-mon[54524]: from='client.39530 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm05-94655-15"}]: dispatch 2026-03-09T20:22:46.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:46 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3277514827' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm05-94310-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:46.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:46 vm09 ceph-mon[54524]: from='client.46571 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm05-94281-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:46.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:46 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:22:46.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:46 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:46.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:46 vm09 ceph-mon[54524]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:22:47.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:47 vm09 ceph-mon[54524]: from='client.39530 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm05-94655-15"}]': finished 2026-03-09T20:22:47.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:47 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3277514827' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm05-94310-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:47.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:47 vm09 ceph-mon[54524]: from='client.46571 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm05-94281-18","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:47.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:47 vm09 ceph-mon[54524]: osdmap e117: 8 total, 8 up, 8 in 2026-03-09T20:22:47.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:47 vm09 ceph-mon[54524]: pgmap v122: 452 pgs: 64 creating+peering, 388 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 54 KiB/s rd, 73 op/s 2026-03-09T20:22:47.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:47 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:47.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:47 vm05 ceph-mon[51870]: from='client.39530 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm05-94655-15"}]': finished 2026-03-09T20:22:47.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:47 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3277514827' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm05-94310-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:47.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:47 vm05 ceph-mon[51870]: from='client.46571 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm05-94281-18","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:47.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:47 vm05 ceph-mon[51870]: osdmap e117: 8 total, 8 up, 8 in 2026-03-09T20:22:47.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:47 vm05 ceph-mon[51870]: pgmap v122: 452 pgs: 64 creating+peering, 388 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 54 KiB/s rd, 73 op/s 2026-03-09T20:22:47.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:47 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:47.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:47 vm05 ceph-mon[61345]: from='client.39530 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm05-94655-15"}]': finished 2026-03-09T20:22:47.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:47 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3277514827' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm05-94310-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:47.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:47 vm05 ceph-mon[61345]: from='client.46571 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm05-94281-18","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:47.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:47 vm05 ceph-mon[61345]: osdmap e117: 8 total, 8 up, 8 in 2026-03-09T20:22:47.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:47 vm05 ceph-mon[61345]: pgmap v122: 452 pgs: 64 creating+peering, 388 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 54 KiB/s rd, 73 op/s 2026-03-09T20:22:47.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:47 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:48.564 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:48 vm05 ceph-mon[51870]: osdmap e118: 8 total, 8 up, 8 in 2026-03-09T20:22:48.564 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:48 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:48.564 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:48 vm05 ceph-mon[61345]: osdmap e118: 8 total, 8 up, 8 in 2026-03-09T20:22:48.564 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:48 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:48.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:48 vm09 ceph-mon[54524]: osdmap e118: 8 total, 8 up, 8 in 2026-03-09T20:22:48.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:48 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:48.909 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:22:48 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:22:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:22:49.430 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [==========] Running 12 tests from 4 test suites. 2026-03-09T20:22:49.430 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [----------] Global test environment set-up. 2026-03-09T20:22:49.430 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [----------] 1 test from LibRadosMiscVersion 2026-03-09T20:22:49.430 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ RUN ] LibRadosMiscVersion.Version 2026-03-09T20:22:49.430 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ OK ] LibRadosMiscVersion.Version (0 ms) 2026-03-09T20:22:49.430 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [----------] 1 test from LibRadosMiscVersion (0 ms total) 2026-03-09T20:22:49.430 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: 2026-03-09T20:22:49.430 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [----------] 2 tests from LibRadosMiscConnectFailure 2026-03-09T20:22:49.430 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ RUN ] LibRadosMiscConnectFailure.ConnectFailure 2026-03-09T20:22:49.430 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: unable to get monitor info from DNS SRV with service name: ceph-mon 2026-03-09T20:22:49.430 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: 2026-03-09T20:21:49.814+0000 7f17d6439880 -1 failed for service _ceph-mon._tcp 2026-03-09T20:22:49.430 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: 2026-03-09T20:21:49.814+0000 7f17d6439880 -1 monclient: get_monmap_and_config cannot identify monitors to contact 2026-03-09T20:22:49.430 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ OK ] LibRadosMiscConnectFailure.ConnectFailure (49 ms) 2026-03-09T20:22:49.430 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ RUN ] LibRadosMiscConnectFailure.ConnectTimeout 2026-03-09T20:22:49.430 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ OK ] LibRadosMiscConnectFailure.ConnectTimeout (5010 ms) 2026-03-09T20:22:49.430 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [----------] 2 tests from LibRadosMiscConnectFailure (5059 ms total) 2026-03-09T20:22:49.430 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: 2026-03-09T20:22:49.430 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [----------] 1 test from LibRadosMiscPool 2026-03-09T20:22:49.430 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ RUN ] LibRadosMiscPool.PoolCreationRace 2026-03-09T20:22:49.430 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: started 0x7f17b4069320 2026-03-09T20:22:49.430 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: started 0x55ab93cf77d0 2026-03-09T20:22:49.430 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: started 2 aios 2026-03-09T20:22:49.430 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: waiting 0x7f17b4069320 2026-03-09T20:22:49.430 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: waiting 0x55ab93cf77d0 2026-03-09T20:22:49.430 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: done. 
2026-03-09T20:22:49.430 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ OK ] LibRadosMiscPool.PoolCreationRace (6186 ms) 2026-03-09T20:22:49.430 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [----------] 1 test from LibRadosMiscPool (6186 ms total) 2026-03-09T20:22:49.430 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: 2026-03-09T20:22:49.430 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [----------] 8 tests from LibRadosMisc 2026-03-09T20:22:49.430 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ RUN ] LibRadosMisc.ClusterFSID 2026-03-09T20:22:49.430 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ OK ] LibRadosMisc.ClusterFSID (0 ms) 2026-03-09T20:22:49.430 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ RUN ] LibRadosMisc.Exec 2026-03-09T20:22:49.430 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ OK ] LibRadosMisc.Exec (221 ms) 2026-03-09T20:22:49.430 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ RUN ] LibRadosMisc.WriteSame 2026-03-09T20:22:49.430 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ OK ] LibRadosMisc.WriteSame (7 ms) 2026-03-09T20:22:49.430 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ RUN ] LibRadosMisc.CmpExt 2026-03-09T20:22:49.430 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ OK ] LibRadosMisc.CmpExt (5 ms) 2026-03-09T20:22:49.430 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ RUN ] LibRadosMisc.Applications 2026-03-09T20:22:49.430 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ OK ] LibRadosMisc.Applications (4794 ms) 2026-03-09T20:22:49.430 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ RUN ] LibRadosMisc.MinCompatOSD 2026-03-09T20:22:49.430 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ OK ] LibRadosMisc.MinCompatOSD (0 ms) 2026-03-09T20:22:49.430 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ RUN ] LibRadosMisc.MinCompatClient 2026-03-09T20:22:49.430 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ OK ] LibRadosMisc.MinCompatClient (0 ms) 2026-03-09T20:22:49.430 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ RUN ] LibRadosMisc.ShutdownRace 2026-03-09T20:22:49.431 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ OK ] LibRadosMisc.ShutdownRace (40342 ms) 2026-03-09T20:22:49.431 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [----------] 8 tests from LibRadosMisc (45369 ms total) 2026-03-09T20:22:49.431 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: 2026-03-09T20:22:49.431 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [----------] Global test environment tear-down 2026-03-09T20:22:49.431 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [==========] 12 tests from 4 test suites ran. (59631 ms total) 2026-03-09T20:22:49.431 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ PASSED ] 12 tests. 2026-03-09T20:22:49.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:49 vm05 ceph-mon[61345]: pgmap v124: 420 pgs: 32 unknown, 388 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 54 KiB/s rd, 71 op/s 2026-03-09T20:22:49.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:49 vm05 ceph-mon[61345]: osdmap e119: 8 total, 8 up, 8 in 2026-03-09T20:22:49.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:49 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1942501975' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStat_vm05-94281-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:49.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:49 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/621955842' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushPP_vm05-94310-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:49.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:49 vm05 ceph-mon[61345]: from='client.49354 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStat_vm05-94281-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:49.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:49 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/477764475' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94758-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:49.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:49 vm05 ceph-mon[61345]: from='client.48587 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushPP_vm05-94310-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:49.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:49 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:49.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:49 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:22:49.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:49 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:22:49.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:49 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:22:49.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:49 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:22:49.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:49 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:22:49.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:49 vm05 ceph-mon[51870]: pgmap v124: 420 pgs: 32 unknown, 388 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 54 KiB/s rd, 71 op/s 2026-03-09T20:22:49.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:49 vm05 ceph-mon[51870]: osdmap e119: 8 total, 8 up, 8 in 2026-03-09T20:22:49.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:49 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1942501975' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStat_vm05-94281-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:49.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:49 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/621955842' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushPP_vm05-94310-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:49.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:49 vm05 ceph-mon[51870]: from='client.49354 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStat_vm05-94281-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:49.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:49 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/477764475' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94758-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:49.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:49 vm05 ceph-mon[51870]: from='client.48587 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushPP_vm05-94310-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:49.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:49 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:49.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:49 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:22:49.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:49 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:22:49.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:49 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:22:49.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:49 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:22:49.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:49 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:22:49.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:49 vm09 ceph-mon[54524]: pgmap v124: 420 pgs: 32 unknown, 388 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 54 KiB/s rd, 71 op/s 2026-03-09T20:22:49.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:49 vm09 ceph-mon[54524]: osdmap e119: 8 total, 8 up, 8 in 2026-03-09T20:22:49.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:49 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1942501975' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStat_vm05-94281-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:49.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:49 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/621955842' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushPP_vm05-94310-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:49.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:49 vm09 ceph-mon[54524]: from='client.49354 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStat_vm05-94281-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:49.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:49 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/477764475' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94758-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:49.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:49 vm09 ceph-mon[54524]: from='client.48587 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushPP_vm05-94310-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:49.665 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:49 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:49.665 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:49 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:22:49.665 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:49 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:22:49.665 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:49 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:22:49.665 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:49 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:22:49.665 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:49 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:22:50.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:50 vm09 ceph-mon[54524]: from='client.49354 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStat_vm05-94281-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:50.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:50 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/477764475' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94758-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:50.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:50 vm09 ceph-mon[54524]: from='client.48587 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushPP_vm05-94310-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:50.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:50 vm09 ceph-mon[54524]: osdmap e120: 8 total, 8 up, 8 in 2026-03-09T20:22:50.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:50 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:50.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:50 vm05 ceph-mon[61345]: from='client.49354 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStat_vm05-94281-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:50.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:50 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/477764475' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94758-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:50.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:50 vm05 ceph-mon[61345]: from='client.48587 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushPP_vm05-94310-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:50.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:50 vm05 ceph-mon[61345]: osdmap e120: 8 total, 8 up, 8 in 2026-03-09T20:22:50.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:50 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:50.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:50 vm05 ceph-mon[51870]: from='client.49354 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStat_vm05-94281-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:50.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:50 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/477764475' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94758-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:50.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:50 vm05 ceph-mon[51870]: from='client.48587 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushPP_vm05-94310-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:50.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:50 vm05 ceph-mon[51870]: osdmap e120: 8 total, 8 up, 8 in 2026-03-09T20:22:50.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:50 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:51.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:51 vm09 ceph-mon[54524]: pgmap v127: 484 pgs: 128 unknown, 356 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:22:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:51 vm09 ceph-mon[54524]: osdmap e121: 8 total, 8 up, 8 in 2026-03-09T20:22:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:51 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:51.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:51 vm05 ceph-mon[61345]: pgmap v127: 484 pgs: 128 unknown, 356 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:22:51.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:51 vm05 ceph-mon[61345]: osdmap e121: 8 total, 8 up, 8 in 2026-03-09T20:22:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:51 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:51.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:51 vm05 ceph-mon[51870]: pgmap v127: 484 pgs: 128 unknown, 356 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:22:51.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:51 vm05 ceph-mon[51870]: osdmap e121: 8 total, 8 up, 8 in 2026-03-09T20:22:51.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:51 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:52.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:52 vm09 ceph-mon[54524]: osdmap e122: 8 total, 8 up, 8 in 2026-03-09T20:22:52.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:52 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3165701134' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm05-94310-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:52.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:52 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1230408021' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm05-94281-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:52.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:52 vm09 ceph-mon[54524]: from='client.49247 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm05-94310-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:52.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:52 vm09 ceph-mon[54524]: from='client.49253 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm05-94281-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:52.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:52 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:52.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:52 vm05 ceph-mon[61345]: osdmap e122: 8 total, 8 up, 8 in 2026-03-09T20:22:52.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:52 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3165701134' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm05-94310-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:52.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:52 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1230408021' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm05-94281-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:52 vm05 ceph-mon[61345]: from='client.49247 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm05-94310-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:52 vm05 ceph-mon[61345]: from='client.49253 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm05-94281-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:52 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:52 vm05 ceph-mon[51870]: osdmap e122: 8 total, 8 up, 8 in 2026-03-09T20:22:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:52 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3165701134' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm05-94310-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:52 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1230408021' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm05-94281-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:52 vm05 ceph-mon[51870]: from='client.49247 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm05-94310-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:52 vm05 ceph-mon[51870]: from='client.49253 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm05-94281-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:52 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:53.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:53 vm05 ceph-mon[61345]: pgmap v130: 484 pgs: 64 unknown, 420 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T20:22:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:53 vm05 ceph-mon[61345]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:22:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:53 vm05 ceph-mon[61345]: from='client.49247 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm05-94310-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:53 vm05 ceph-mon[61345]: from='client.49253 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm05-94281-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:53 vm05 ceph-mon[61345]: osdmap e123: 8 total, 8 up, 8 in 2026-03-09T20:22:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:53 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:53 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3973316752' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm05-94758-16"}]: dispatch 2026-03-09T20:22:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:53 vm05 ceph-mon[61345]: from='client.49265 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm05-94758-16"}]: dispatch 2026-03-09T20:22:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:53 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3973316752' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm05-94758-16"}]: dispatch 2026-03-09T20:22:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:53 vm05 ceph-mon[61345]: from='client.49265 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm05-94758-16"}]: dispatch 2026-03-09T20:22:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:53 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3973316752' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm05-94758-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:22:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:53 vm05 ceph-mon[61345]: from='client.49265 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm05-94758-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:22:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:53 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:53.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:53 vm05 ceph-mon[51870]: pgmap v130: 484 pgs: 64 unknown, 420 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T20:22:53.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:53 vm05 ceph-mon[51870]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:22:53.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:53 vm05 ceph-mon[51870]: from='client.49247 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm05-94310-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:53.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:53 vm05 ceph-mon[51870]: from='client.49253 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm05-94281-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:53.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:53 vm05 ceph-mon[51870]: osdmap e123: 8 total, 8 up, 8 in 2026-03-09T20:22:53.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:53 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:53.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:53 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3973316752' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm05-94758-16"}]: dispatch 2026-03-09T20:22:53.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:53 vm05 ceph-mon[51870]: from='client.49265 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm05-94758-16"}]: dispatch 2026-03-09T20:22:53.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:53 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3973316752' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm05-94758-16"}]: dispatch 2026-03-09T20:22:53.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:53 vm05 ceph-mon[51870]: from='client.49265 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm05-94758-16"}]: dispatch 2026-03-09T20:22:53.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:53 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3973316752' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm05-94758-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:22:53.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:53 vm05 ceph-mon[51870]: from='client.49265 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm05-94758-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:22:53.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:53 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:54.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:53 vm09 ceph-mon[54524]: pgmap v130: 484 pgs: 64 unknown, 420 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T20:22:54.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:53 vm09 ceph-mon[54524]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:22:54.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:53 vm09 ceph-mon[54524]: from='client.49247 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm05-94310-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:54.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:53 vm09 ceph-mon[54524]: from='client.49253 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm05-94281-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:54.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:53 vm09 ceph-mon[54524]: osdmap e123: 8 total, 8 up, 8 in 2026-03-09T20:22:54.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:53 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:54.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:53 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3973316752' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm05-94758-16"}]: dispatch 2026-03-09T20:22:54.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:53 vm09 ceph-mon[54524]: from='client.49265 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm05-94758-16"}]: dispatch 2026-03-09T20:22:54.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:53 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3973316752' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm05-94758-16"}]: dispatch 2026-03-09T20:22:54.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:53 vm09 ceph-mon[54524]: from='client.49265 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm05-94758-16"}]: dispatch 2026-03-09T20:22:54.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:53 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3973316752' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm05-94758-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:22:54.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:53 vm09 ceph-mon[54524]: from='client.49265 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm05-94758-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:22:54.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:53 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:54.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:54 vm05 ceph-mon[61345]: from='client.49265 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm05-94758-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:22:54.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:54 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3973316752' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm05-94758-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm05-94758-16"}]: dispatch 2026-03-09T20:22:54.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:54 vm05 ceph-mon[61345]: osdmap e124: 8 total, 8 up, 8 in 2026-03-09T20:22:54.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:54 vm05 ceph-mon[61345]: from='client.49265 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm05-94758-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm05-94758-16"}]: dispatch 2026-03-09T20:22:54.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:54 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:54.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:54 vm05 ceph-mon[61345]: osdmap e125: 8 total, 8 up, 8 in 2026-03-09T20:22:54.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:54 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/4264971425' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm05-94310-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:54.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:54 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3162896271' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm05-94281-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:54.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:54 vm05 ceph-mon[61345]: from='client.49939 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm05-94281-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:54.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:54 vm05 ceph-mon[51870]: from='client.49265 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm05-94758-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:22:54.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:54 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3973316752' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm05-94758-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm05-94758-16"}]: dispatch 2026-03-09T20:22:54.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:54 vm05 ceph-mon[51870]: osdmap e124: 8 total, 8 up, 8 in 2026-03-09T20:22:54.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:54 vm05 ceph-mon[51870]: from='client.49265 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm05-94758-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm05-94758-16"}]: dispatch 2026-03-09T20:22:54.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:54 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:54.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:54 vm05 ceph-mon[51870]: osdmap e125: 8 total, 8 up, 8 in 2026-03-09T20:22:54.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:54 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/4264971425' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm05-94310-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:54.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:54 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3162896271' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm05-94281-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:54.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:54 vm05 ceph-mon[51870]: from='client.49939 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm05-94281-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:55.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:54 vm09 ceph-mon[54524]: from='client.49265 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm05-94758-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:22:55.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:54 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3973316752' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm05-94758-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm05-94758-16"}]: dispatch 2026-03-09T20:22:55.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:54 vm09 ceph-mon[54524]: osdmap e124: 8 total, 8 up, 8 in 2026-03-09T20:22:55.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:54 vm09 ceph-mon[54524]: from='client.49265 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm05-94758-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm05-94758-16"}]: dispatch 2026-03-09T20:22:55.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:54 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:55.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:54 vm09 ceph-mon[54524]: osdmap e125: 8 total, 8 up, 8 in 2026-03-09T20:22:55.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:54 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/4264971425' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm05-94310-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:55.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:54 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3162896271' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm05-94281-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:55.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:54 vm09 ceph-mon[54524]: from='client.49939 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm05-94281-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:55.588 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:22:55 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:22:55.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:55 vm05 ceph-mon[61345]: pgmap v133: 388 pgs: 388 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:22:55.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:55 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:55.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:55 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-94413-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T20:22:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:55 vm05 ceph-mon[61345]: from='client.24680 ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-94413-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T20:22:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:55 vm05 ceph-mon[61345]: from='client.49265 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm05-94758-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm05-94758-16"}]': finished 2026-03-09T20:22:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:55 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/4264971425' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm05-94310-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:55 vm05 ceph-mon[61345]: from='client.49939 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm05-94281-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:55 vm05 ceph-mon[61345]: from='client.24680 ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm05-94413-1","var":"pg_num","val":"11"}]': finished 2026-03-09T20:22:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:55 vm05 ceph-mon[61345]: osdmap e126: 8 total, 8 up, 8 in 2026-03-09T20:22:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:55 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:55 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:55 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-94413-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T20:22:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:55 vm05 ceph-mon[61345]: from='client.24680 ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-94413-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T20:22:55.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:55 vm05 ceph-mon[51870]: pgmap v133: 388 pgs: 388 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:22:55.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:55 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:55.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:55 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-94413-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T20:22:55.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:55 vm05 ceph-mon[51870]: from='client.24680 ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-94413-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T20:22:55.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:55 vm05 ceph-mon[51870]: from='client.49265 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm05-94758-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm05-94758-16"}]': finished 2026-03-09T20:22:55.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:55 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/4264971425' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm05-94310-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:55.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:55 vm05 ceph-mon[51870]: from='client.49939 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm05-94281-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:55.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:55 vm05 ceph-mon[51870]: from='client.24680 ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm05-94413-1","var":"pg_num","val":"11"}]': finished 2026-03-09T20:22:55.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:55 vm05 ceph-mon[51870]: osdmap e126: 8 total, 8 up, 8 in 2026-03-09T20:22:55.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:55 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:55.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:55 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:55.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:55 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-94413-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T20:22:55.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:55 vm05 ceph-mon[51870]: from='client.24680 ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-94413-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T20:22:56.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:55 vm09 ceph-mon[54524]: pgmap v133: 388 pgs: 388 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:22:56.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:55 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:56.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:55 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-94413-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T20:22:56.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:55 vm09 ceph-mon[54524]: from='client.24680 ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-94413-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T20:22:56.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:55 vm09 ceph-mon[54524]: from='client.49265 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm05-94758-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm05-94758-16"}]': finished 2026-03-09T20:22:56.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:55 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/4264971425' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm05-94310-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:56.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:55 vm09 ceph-mon[54524]: from='client.49939 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm05-94281-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:56.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:55 vm09 ceph-mon[54524]: from='client.24680 ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm05-94413-1","var":"pg_num","val":"11"}]': finished 2026-03-09T20:22:56.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:55 vm09 ceph-mon[54524]: osdmap e126: 8 total, 8 up, 8 in 2026-03-09T20:22:56.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:55 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:56.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:55 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:56.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:55 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-94413-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T20:22:56.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:55 vm09 ceph-mon[54524]: from='client.24680 ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-94413-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T20:22:56.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:56 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:22:56.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:56 vm05 ceph-mon[61345]: from='client.24680 ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm05-94413-1","var":"pgp_num","val":"11"}]': finished 2026-03-09T20:22:56.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:56 vm05 ceph-mon[61345]: osdmap e127: 8 total, 8 up, 8 in 2026-03-09T20:22:56.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:56 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:56.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:56 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:22:56.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:56 vm05 ceph-mon[51870]: from='client.24680 ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm05-94413-1","var":"pgp_num","val":"11"}]': finished 2026-03-09T20:22:56.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:56 vm05 ceph-mon[51870]: osdmap e127: 8 total, 8 up, 8 in 2026-03-09T20:22:56.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:56 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:57.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:56 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:22:57.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:56 vm09 ceph-mon[54524]: from='client.24680 ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm05-94413-1","var":"pgp_num","val":"11"}]': finished 2026-03-09T20:22:57.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:56 vm09 ceph-mon[54524]: osdmap e127: 8 total, 8 up, 8 in 2026-03-09T20:22:57.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:56 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:58.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:57 vm09 ceph-mon[54524]: pgmap v136: 460 pgs: 8 creating+activating, 57 creating+peering, 7 unknown, 388 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:22:58.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:57 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:58.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:57 vm05 ceph-mon[61345]: pgmap v136: 460 pgs: 8 creating+activating, 57 creating+peering, 7 unknown, 388 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:22:58.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:57 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:58.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:57 vm05 ceph-mon[51870]: pgmap v136: 460 pgs: 8 creating+activating, 57 creating+peering, 7 unknown, 388 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:22:58.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:57 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:58.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:58 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:58.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:58 vm05 ceph-mon[61345]: osdmap e128: 8 total, 8 up, 8 in 2026-03-09T20:22:58.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:58 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/283559759' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm05-94310-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:58.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:58 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1874500984' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm05-94281-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:58.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:58 vm05 ceph-mon[61345]: from='client.49948 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm05-94310-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:58.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:58 vm05 ceph-mon[61345]: from='client.49945 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm05-94281-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:58.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:58 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch 2026-03-09T20:22:58.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:58 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-94592-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:58.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:58 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:58.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:58 vm05 ceph-mon[61345]: from='client.49948 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm05-94310-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:58.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:58 vm05 ceph-mon[61345]: from='client.49945 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm05-94281-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:58.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:58 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-94592-1","app": "app1","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:58.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:58 vm05 ceph-mon[61345]: osdmap e129: 8 total, 8 up, 8 in 2026-03-09T20:22:58.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:58 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:58.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:58 vm05 ceph-mon[51870]: osdmap e128: 8 total, 8 up, 8 in 2026-03-09T20:22:58.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:58 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/283559759' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm05-94310-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:58.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:58 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/1874500984' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm05-94281-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:58.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:58 vm05 ceph-mon[51870]: from='client.49948 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm05-94310-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:58.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:58 vm05 ceph-mon[51870]: from='client.49945 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm05-94281-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:58.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:58 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch 2026-03-09T20:22:58.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:58 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-94592-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:58.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:58 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:58.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:58 vm05 ceph-mon[51870]: from='client.49948 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm05-94310-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:58.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:58 vm05 ceph-mon[51870]: from='client.49945 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm05-94281-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:58.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:58 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-94592-1","app": "app1","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:58.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:58 vm05 ceph-mon[51870]: osdmap e129: 8 total, 8 up, 8 in 2026-03-09T20:22:58.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:22:58 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:22:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:22:59.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:58 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:59.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:58 vm09 ceph-mon[54524]: osdmap e128: 8 total, 8 up, 8 in 2026-03-09T20:22:59.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:58 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/283559759' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm05-94310-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:59.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:58 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1874500984' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm05-94281-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:59.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:58 vm09 ceph-mon[54524]: from='client.49948 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm05-94310-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:59.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:58 vm09 ceph-mon[54524]: from='client.49945 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm05-94281-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:59.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:58 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch 2026-03-09T20:22:59.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:58 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-94592-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:22:59.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:58 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:22:59.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:58 vm09 ceph-mon[54524]: from='client.49948 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm05-94310-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:59.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:58 vm09 ceph-mon[54524]: from='client.49945 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm05-94281-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:59.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:58 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-94592-1","app": "app1","yes_i_really_mean_it": true}]': finished 2026-03-09T20:22:59.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:58 vm09 ceph-mon[54524]: osdmap e129: 8 total, 8 up, 8 in 2026-03-09T20:23:00.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:59 vm09 ceph-mon[54524]: pgmap v139: 460 pgs: 1 creating+peering, 71 unknown, 388 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:23:00.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:59 vm09 ceph-mon[54524]: Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:23:00.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:59 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-94592-1","app": "app2"}]: dispatch 2026-03-09T20:23:00.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:59 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-94592-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:00.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:59 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:23:00.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:59 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:23:00.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:59 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-94592-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:00.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:59 vm09 ceph-mon[54524]: osdmap e130: 8 total, 8 up, 8 in 2026-03-09T20:23:00.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:59 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm05-94592-1","app":"dne","key":"key1","value":"value1"}]: dispatch 2026-03-09T20:23:00.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:22:59 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm05-94592-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T20:23:00.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:59 vm05 ceph-mon[61345]: pgmap v139: 460 pgs: 1 creating+peering, 71 unknown, 388 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:23:00.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:59 vm05 ceph-mon[61345]: Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:23:00.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:59 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-94592-1","app": "app2"}]: dispatch 2026-03-09T20:23:00.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:59 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-94592-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:00.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:59 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:23:00.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:59 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:23:00.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:59 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-94592-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:00.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:59 vm05 ceph-mon[61345]: osdmap e130: 8 total, 8 up, 8 in 2026-03-09T20:23:00.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:59 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm05-94592-1","app":"dne","key":"key1","value":"value1"}]: dispatch 2026-03-09T20:23:00.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:22:59 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm05-94592-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T20:23:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:59 vm05 ceph-mon[51870]: pgmap v139: 460 pgs: 1 creating+peering, 71 unknown, 388 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:23:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:59 vm05 ceph-mon[51870]: Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:23:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:59 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-94592-1","app": "app2"}]: dispatch 2026-03-09T20:23:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:59 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-94592-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:59 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:23:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:59 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:23:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:59 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-94592-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:59 vm05 ceph-mon[51870]: osdmap e130: 8 total, 8 up, 8 in 2026-03-09T20:23:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:59 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm05-94592-1","app":"dne","key":"key1","value":"value1"}]: dispatch 2026-03-09T20:23:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:22:59 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm05-94592-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-09T20:23:01.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:00 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:23:01.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:00 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm05-94592-1","app":"app1","key":"key1","value":"value1"}]': finished 2026-03-09T20:23:01.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:00 vm09 ceph-mon[54524]: osdmap e131: 8 total, 8 up, 8 in 2026-03-09T20:23:01.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:00 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm05-94592-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T20:23:01.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:00 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2704884446' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm05-94310-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:01.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:00 vm09 ceph-mon[54524]: from='client.49954 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm05-94310-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:01.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:00 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3934108503' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm05-94281-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:01.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:00 vm09 ceph-mon[54524]: from='client.49960 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm05-94281-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:01.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:00 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:23:01.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:00 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm05-94592-1","app":"app1","key":"key1","value":"value1"}]': finished 2026-03-09T20:23:01.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:00 vm05 ceph-mon[61345]: osdmap e131: 8 total, 8 up, 8 in 2026-03-09T20:23:01.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:00 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm05-94592-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T20:23:01.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:00 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/2704884446' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm05-94310-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:01.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:00 vm05 ceph-mon[61345]: from='client.49954 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm05-94310-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:01.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:00 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3934108503' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm05-94281-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:01.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:00 vm05 ceph-mon[61345]: from='client.49960 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm05-94281-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:01.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:00 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:23:01.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:00 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm05-94592-1","app":"app1","key":"key1","value":"value1"}]': finished 2026-03-09T20:23:01.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:00 vm05 ceph-mon[51870]: osdmap e131: 8 total, 8 up, 8 in 2026-03-09T20:23:01.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:00 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm05-94592-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-09T20:23:01.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:00 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2704884446' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm05-94310-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:01.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:00 vm05 ceph-mon[51870]: from='client.49954 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm05-94310-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:01.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:00 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3934108503' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm05-94281-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:01.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:00 vm05 ceph-mon[51870]: from='client.49960 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm05-94281-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:02.009 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: Running main() from gmock_main.cc 2026-03-09T20:23:02.009 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [==========] Running 21 tests from 5 test suites. 
2026-03-09T20:23:02.009 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [----------] Global test environment set-up. 2026-03-09T20:23:02.009 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [----------] 5 tests from LibRadosSnapshotsPP 2026-03-09T20:23:02.009 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: seed 94758 2026-03-09T20:23:02.009 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsPP.SnapListPP 2026-03-09T20:23:02.009 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsPP.SnapListPP (2078 ms) 2026-03-09T20:23:02.009 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsPP.SnapRemovePP 2026-03-09T20:23:02.009 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsPP.SnapRemovePP (1982 ms) 2026-03-09T20:23:02.009 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsPP.RollbackPP 2026-03-09T20:23:02.009 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsPP.RollbackPP (2837 ms) 2026-03-09T20:23:02.009 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsPP.SnapGetNamePP 2026-03-09T20:23:02.009 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsPP.SnapGetNamePP (2022 ms) 2026-03-09T20:23:02.009 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsPP.SnapCreateRemovePP 2026-03-09T20:23:02.009 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsPP.SnapCreateRemovePP (3819 ms) 2026-03-09T20:23:02.009 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [----------] 5 tests from LibRadosSnapshotsPP (12738 ms total) 2026-03-09T20:23:02.009 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: 2026-03-09T20:23:02.009 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [----------] 7 tests from LibRadosSnapshotsSelfManagedPP 2026-03-09T20:23:02.010 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedPP.SnapPP 2026-03-09T20:23:02.010 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedPP.SnapPP (4512 ms) 2026-03-09T20:23:02.010 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedPP.RollbackPP 2026-03-09T20:23:02.010 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedPP.RollbackPP (3777 ms) 2026-03-09T20:23:02.010 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedPP.SnapOverlapPP 2026-03-09T20:23:02.010 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedPP.SnapOverlapPP (5560 ms) 2026-03-09T20:23:02.010 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedPP.Bug11677 2026-03-09T20:23:02.010 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedPP.Bug11677 (4194 ms) 2026-03-09T20:23:02.010 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedPP.OrderSnap 2026-03-09T20:23:02.010 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedPP.OrderSnap (2289 ms) 2026-03-09T20:23:02.010 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedPP.WriteRollback 
2026-03-09T20:23:02.010 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: /ceph/rpmbuild/BUILD/ceph-19.2.3-678-ge911bdeb/src/test/librados/snapshots_cxx.cc:460: Skipped 2026-03-09T20:23:02.010 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: 2026-03-09T20:23:02.010 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ SKIPPED ] LibRadosSnapshotsSelfManagedPP.WriteRollback (0 ms) 2026-03-09T20:23:02.010 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedPP.ReusePurgedSnap 2026-03-09T20:23:02.010 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: deleting snap 14 in pool LibRadosSnapshotsSelfManagedPP_vm05-94758-7 2026-03-09T20:23:02.010 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: waiting for snaps to purge 2026-03-09T20:23:02.010 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedPP.ReusePurgedSnap (17825 ms) 2026-03-09T20:23:02.010 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [----------] 7 tests from LibRadosSnapshotsSelfManagedPP (38157 ms total) 2026-03-09T20:23:02.010 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: 2026-03-09T20:23:02.010 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [----------] 2 tests from LibRadosPoolIsInSelfmanagedSnapsMode 2026-03-09T20:23:02.010 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ RUN ] LibRadosPoolIsInSelfmanagedSnapsMode.NotConnected 2026-03-09T20:23:02.010 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ OK ] LibRadosPoolIsInSelfmanagedSnapsMode.NotConnected (3 ms) 2026-03-09T20:23:02.010 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ RUN ] LibRadosPoolIsInSelfmanagedSnapsMode.FreshInstance 2026-03-09T20:23:02.010 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ OK ] LibRadosPoolIsInSelfmanagedSnapsMode.FreshInstance (6361 ms) 2026-03-09T20:23:02.010 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [----------] 2 tests from LibRadosPoolIsInSelfmanagedSnapsMode (6365 ms total) 2026-03-09T20:23:02.010 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: 2026-03-09T20:23:02.010 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [----------] 4 tests from LibRadosSnapshotsECPP 2026-03-09T20:23:02.010 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsECPP.SnapListPP 2026-03-09T20:23:02.010 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsECPP.SnapListPP (3149 ms) 2026-03-09T20:23:02.010 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsECPP.SnapRemovePP 2026-03-09T20:23:02.010 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsECPP.SnapRemovePP (1993 ms) 2026-03-09T20:23:02.010 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsECPP.RollbackPP 2026-03-09T20:23:02.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:01 vm09 ceph-mon[54524]: pgmap v142: 396 pgs: 1 creating+peering, 7 unknown, 388 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:23:02.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:01 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/637910135' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm05-94592-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-09T20:23:02.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:01 vm09 ceph-mon[54524]: from='client.49954 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm05-94310-18","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:02.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:01 vm09 ceph-mon[54524]: from='client.49960 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemove_vm05-94281-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:02.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:01 vm09 ceph-mon[54524]: osdmap e132: 8 total, 8 up, 8 in 2026-03-09T20:23:02.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:01 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm05-94592-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T20:23:02.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:01 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:23:02.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:01 vm05 ceph-mon[61345]: pgmap v142: 396 pgs: 1 creating+peering, 7 unknown, 388 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:23:02.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:01 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm05-94592-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-09T20:23:02.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:01 vm05 ceph-mon[61345]: from='client.49954 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm05-94310-18","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:02.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:01 vm05 ceph-mon[61345]: from='client.49960 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemove_vm05-94281-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:02.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:01 vm05 ceph-mon[61345]: osdmap e132: 8 total, 8 up, 8 in 2026-03-09T20:23:02.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:01 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm05-94592-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T20:23:02.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:01 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:23:02.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:01 vm05 ceph-mon[51870]: pgmap v142: 396 pgs: 1 creating+peering, 7 unknown, 388 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:23:02.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:01 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/637910135' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm05-94592-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-09T20:23:02.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:01 vm05 ceph-mon[51870]: from='client.49954 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm05-94310-18","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:02.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:01 vm05 ceph-mon[51870]: from='client.49960 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemove_vm05-94281-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:02.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:01 vm05 ceph-mon[51870]: osdmap e132: 8 total, 8 up, 8 in 2026-03-09T20:23:02.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:01 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm05-94592-1","app":"app1","key":"key1"}]: dispatch 2026-03-09T20:23:02.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:01 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:23:03.145 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsECPP.Rollb api_misc_pp: [==========] Running 31 tests from 7 test suites. 2026-03-09T20:23:03.145 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [----------] Global test environment set-up. 2026-03-09T20:23:03.145 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [----------] 1 test from LibRadosMiscVersion 2026-03-09T20:23:03.145 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosMiscVersion.VersionPP 2026-03-09T20:23:03.145 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscVersion.VersionPP (0 ms) 2026-03-09T20:23:03.145 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [----------] 1 test from LibRadosMiscVersion (0 ms total) 2026-03-09T20:23:03.145 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: 2026-03-09T20:23:03.145 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [----------] 22 tests from LibRadosMiscPP 2026-03-09T20:23:03.145 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: seed 94592 2026-03-09T20:23:03.145 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.WaitOSDMapPP 2026-03-09T20:23:03.145 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.WaitOSDMapPP (17 ms) 2026-03-09T20:23:03.145 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.LongNamePP 2026-03-09T20:23:03.145 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.LongNamePP (552 ms) 2026-03-09T20:23:03.145 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.LongLocatorPP 2026-03-09T20:23:03.145 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.LongLocatorPP (27 ms) 2026-03-09T20:23:03.145 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.LongNSpacePP 2026-03-09T20:23:03.146 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.LongNSpacePP (14 ms) 2026-03-09T20:23:03.146 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] 
LibRadosMiscPP.LongAttrNamePP 2026-03-09T20:23:03.146 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.LongAttrNamePP (15 ms) 2026-03-09T20:23:03.146 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.ExecPP 2026-03-09T20:23:03.146 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.ExecPP (5 ms) 2026-03-09T20:23:03.146 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.BadFlagsPP 2026-03-09T20:23:03.146 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.BadFlagsPP (4 ms) 2026-03-09T20:23:03.146 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.Operate1PP 2026-03-09T20:23:03.146 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.Operate1PP (13 ms) 2026-03-09T20:23:03.146 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.Operate2PP 2026-03-09T20:23:03.146 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.Operate2PP (4 ms) 2026-03-09T20:23:03.146 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.BigObjectPP 2026-03-09T20:23:03.146 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.BigObjectPP (17 ms) 2026-03-09T20:23:03.146 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.AioOperatePP 2026-03-09T20:23:03.146 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.AioOperatePP (3 ms) 2026-03-09T20:23:03.146 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.AssertExistsPP 2026-03-09T20:23:03.146 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.AssertExistsPP (7 ms) 2026-03-09T20:23:03.146 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.AssertVersionPP 2026-03-09T20:23:03.146 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.AssertVersionPP (15 ms) 2026-03-09T20:23:03.146 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.BigAttrPP 2026-03-09T20:23:03.146 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: osd_max_attr_size = 0 2026-03-09T20:23:03.146 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: osd_max_attr_size == 0; skipping test 2026-03-09T20:23:03.146 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.BigAttrPP (3200 ms) 2026-03-09T20:23:03.146 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.CopyPP 2026-03-09T20:23:03.146 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.CopyPP (943 ms) 2026-03-09T20:23:03.146 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.CopyScrubPP 2026-03-09T20:23:03.146 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: waiting for initial deep scrubs... 2026-03-09T20:23:03.146 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: done waiting, doing copies 2026-03-09T20:23:03.146 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: waiting for final deep scrubs... 
2026-03-09T20:23:03.146 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: done waiting 2026-03-09T20:23:03.146 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.CopyScrubPP (61875 ms) 2026-03-09T20:23:03.146 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.WriteSamePP 2026-03-09T20:23:03.146 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.WriteSamePP (4 ms) 2026-03-09T20:23:03.146 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.CmpExtPP 2026-03-09T20:23:03.146 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.CmpExtPP (2 ms) 2026-03-09T20:23:03.146 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.Applications 2026-03-09T20:23:03.146 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.Applications (3800 ms) 2026-03-09T20:23:03.146 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.MinCompatOSD 2026-03-09T20:23:03.146 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.MinCompatOSD (0 ms) 2026-03-09T20:23:03.146 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.MinCompatClient 2026-03-09T20:23:03.146 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.MinCompatClient (0 ms) 2026-03-09T20:23:03.147 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.Conf 2026-03-09T20:23:03.147 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.Conf (0 ms) 2026-03-09T20:23:03.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:03 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm05-94592-1","app":"app1","key":"key1"}]': finished 2026-03-09T20:23:03.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:03 vm05 ceph-mon[61345]: osdmap e133: 8 total, 8 up, 8 in 2026-03-09T20:23:03.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:03 vm05 ceph-mon[61345]: pgmap v146: 396 pgs: 396 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 6.3 KiB/s wr, 12 op/s 2026-03-09T20:23:03.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:03 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm05-94413-1", "var": "pgp_num_actual", "val": "31"}]: dispatch 2026-03-09T20:23:03.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:03 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm05-94413-1", "var": "pgp_num_actual", "val": "31"}]: dispatch 2026-03-09T20:23:03.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:03 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:23:03.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:03 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/637910135' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm05-94592-1","app":"app1","key":"key1"}]': finished 2026-03-09T20:23:03.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:03 vm05 ceph-mon[51870]: osdmap e133: 8 total, 8 up, 8 in 2026-03-09T20:23:03.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:03 vm05 ceph-mon[51870]: pgmap v146: 396 pgs: 396 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 6.3 KiB/s wr, 12 op/s 2026-03-09T20:23:03.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:03 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm05-94413-1", "var": "pgp_num_actual", "val": "31"}]: dispatch 2026-03-09T20:23:03.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:03 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm05-94413-1", "var": "pgp_num_actual", "val": "31"}]: dispatch 2026-03-09T20:23:03.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:03 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:23:03.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:03 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/637910135' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm05-94592-1","app":"app1","key":"key1"}]': finished 2026-03-09T20:23:03.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:03 vm09 ceph-mon[54524]: osdmap e133: 8 total, 8 up, 8 in 2026-03-09T20:23:03.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:03 vm09 ceph-mon[54524]: pgmap v146: 396 pgs: 396 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 6.3 KiB/s wr, 12 op/s 2026-03-09T20:23:03.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:03 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm05-94413-1", "var": "pgp_num_actual", "val": "31"}]: dispatch 2026-03-09T20:23:03.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:03 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm05-94413-1", "var": "pgp_num_actual", "val": "31"}]: dispatch 2026-03-09T20:23:03.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:03 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:23:04.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:04 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm05-94413-1", "var": "pgp_num_actual", "val": "31"}]': finished 2026-03-09T20:23:04.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:04 vm05 ceph-mon[61345]: osdmap e134: 8 total, 8 up, 8 in 2026-03-09T20:23:04.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:04 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/718871320' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm05-94281-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:04.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:04 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1889465421' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm05-94310-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:04 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1976652207' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-94592-24"}]: dispatch 2026-03-09T20:23:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:04 vm05 ceph-mon[61345]: from='client.49966 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm05-94281-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:04 vm05 ceph-mon[61345]: from='client.49972 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm05-94310-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:04 vm05 ceph-mon[61345]: from='client.49316 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-94592-24"}]: dispatch 2026-03-09T20:23:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:04 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1976652207' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-94592-24"}]: dispatch 2026-03-09T20:23:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:04 vm05 ceph-mon[61345]: from='client.49316 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-94592-24"}]: dispatch 2026-03-09T20:23:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:04 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1976652207' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm05-94592-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:04 vm05 ceph-mon[61345]: from='client.49316 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm05-94592-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:04 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:23:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:04 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-94413-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T20:23:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:04 vm05 ceph-mon[61345]: from='client.24680 ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-94413-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T20:23:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:04 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm05-94413-1", "var": "pgp_num_actual", "val": "31"}]': finished 2026-03-09T20:23:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:04 vm05 ceph-mon[51870]: osdmap e134: 8 total, 8 up, 8 in 2026-03-09T20:23:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:04 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/718871320' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm05-94281-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:04 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1889465421' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm05-94310-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:04 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1976652207' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-94592-24"}]: dispatch 2026-03-09T20:23:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:04 vm05 ceph-mon[51870]: from='client.49966 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm05-94281-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:04 vm05 ceph-mon[51870]: from='client.49972 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm05-94310-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:04 vm05 ceph-mon[51870]: from='client.49316 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-94592-24"}]: dispatch 2026-03-09T20:23:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:04 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1976652207' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-94592-24"}]: dispatch 2026-03-09T20:23:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:04 vm05 ceph-mon[51870]: from='client.49316 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-94592-24"}]: dispatch 2026-03-09T20:23:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:04 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/1976652207' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm05-94592-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:04 vm05 ceph-mon[51870]: from='client.49316 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm05-94592-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:04 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:23:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:04 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-94413-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T20:23:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:04 vm05 ceph-mon[51870]: from='client.24680 ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-94413-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T20:23:04.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:04 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm05-94413-1", "var": "pgp_num_actual", "val": "31"}]': finished 2026-03-09T20:23:04.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:04 vm09 ceph-mon[54524]: osdmap e134: 8 total, 8 up, 8 in 2026-03-09T20:23:04.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:04 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/718871320' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm05-94281-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:04.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:04 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1889465421' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm05-94310-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:04.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:04 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1976652207' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-94592-24"}]: dispatch 2026-03-09T20:23:04.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:04 vm09 ceph-mon[54524]: from='client.49966 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm05-94281-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:04.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:04 vm09 ceph-mon[54524]: from='client.49972 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm05-94310-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:04.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:04 vm09 ceph-mon[54524]: from='client.49316 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-94592-24"}]: dispatch 2026-03-09T20:23:04.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:04 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/1976652207' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-94592-24"}]: dispatch 2026-03-09T20:23:04.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:04 vm09 ceph-mon[54524]: from='client.49316 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-94592-24"}]: dispatch 2026-03-09T20:23:04.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:04 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1976652207' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm05-94592-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:04.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:04 vm09 ceph-mon[54524]: from='client.49316 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm05-94592-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:04.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:04 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:23:04.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:04 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-94413-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T20:23:04.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:04 vm09 ceph-mon[54524]: from='client.24680 ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-94413-1","var":"pg_num","val":"11"}]: dispatch 2026-03-09T20:23:05.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:05 vm09 ceph-mon[54524]: from='client.49966 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm05-94281-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:05.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:05 vm09 ceph-mon[54524]: from='client.49972 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm05-94310-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:05.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:05 vm09 ceph-mon[54524]: from='client.49316 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm05-94592-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:23:05.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:05 vm09 ceph-mon[54524]: from='client.24680 ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm05-94413-1","var":"pg_num","val":"11"}]': finished 2026-03-09T20:23:05.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:05 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:23:05.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:05 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/1976652207' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-94592-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm05-94592-24"}]: dispatch 2026-03-09T20:23:05.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:05 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:23:05.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:05 vm09 ceph-mon[54524]: osdmap e135: 8 total, 8 up, 8 in 2026-03-09T20:23:05.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:05 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-94413-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T20:23:05.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:05 vm09 ceph-mon[54524]: from='client.49316 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-94592-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm05-94592-24"}]: dispatch 2026-03-09T20:23:05.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:05 vm09 ceph-mon[54524]: from='client.24680 ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-94413-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T20:23:05.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:05 vm09 ceph-mon[54524]: pgmap v149: 428 pgs: 64 unknown, 364 active+clean; 464 KiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:23:05.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:05 vm09 ceph-mon[54524]: from='client.24680 ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm05-94413-1","var":"pgp_num","val":"11"}]': finished 2026-03-09T20:23:05.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:05 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:23:05.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:05 vm09 ceph-mon[54524]: osdmap e136: 8 total, 8 up, 8 in 2026-03-09T20:23:05.523 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:23:05 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:23:05.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:05 vm05 ceph-mon[61345]: from='client.49966 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm05-94281-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:05.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:05 vm05 ceph-mon[61345]: from='client.49972 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm05-94310-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:05.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:05 vm05 ceph-mon[61345]: from='client.49316 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm05-94592-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:23:05.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:05 vm05 ceph-mon[61345]: from='client.24680 ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm05-94413-1","var":"pg_num","val":"11"}]': finished 2026-03-09T20:23:05.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:05 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:23:05.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:05 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1976652207' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-94592-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm05-94592-24"}]: dispatch 2026-03-09T20:23:05.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:05 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:23:05.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:05 vm05 ceph-mon[61345]: osdmap e135: 8 total, 8 up, 8 in 2026-03-09T20:23:05.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:05 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-94413-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T20:23:05.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:05 vm05 ceph-mon[61345]: from='client.49316 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-94592-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm05-94592-24"}]: dispatch 2026-03-09T20:23:05.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:05 vm05 ceph-mon[61345]: from='client.24680 ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-94413-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T20:23:05.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:05 vm05 ceph-mon[61345]: pgmap v149: 428 pgs: 64 unknown, 364 active+clean; 464 KiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:23:05.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:05 vm05 ceph-mon[61345]: from='client.24680 ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm05-94413-1","var":"pgp_num","val":"11"}]': finished 2026-03-09T20:23:05.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:05 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:23:05.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:05 vm05 ceph-mon[61345]: osdmap e136: 8 total, 8 up, 8 in 2026-03-09T20:23:05.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:05 vm05 ceph-mon[51870]: from='client.49966 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm05-94281-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:05.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:05 vm05 ceph-mon[51870]: from='client.49972 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm05-94310-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:05.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:05 vm05 ceph-mon[51870]: from='client.49316 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm05-94592-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:23:05.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:05 vm05 ceph-mon[51870]: from='client.24680 ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm05-94413-1","var":"pg_num","val":"11"}]': finished 2026-03-09T20:23:05.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:05 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:23:05.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:05 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1976652207' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-94592-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm05-94592-24"}]: dispatch 2026-03-09T20:23:05.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:05 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:23:05.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:05 vm05 ceph-mon[51870]: osdmap e135: 8 total, 8 up, 8 in 2026-03-09T20:23:05.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:05 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-94413-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T20:23:05.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:05 vm05 ceph-mon[51870]: from='client.49316 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-94592-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm05-94592-24"}]: dispatch 2026-03-09T20:23:05.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:05 vm05 ceph-mon[51870]: from='client.24680 ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-94413-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-09T20:23:05.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:05 vm05 ceph-mon[51870]: pgmap v149: 428 pgs: 64 unknown, 364 active+clean; 464 KiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:23:05.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:05 vm05 ceph-mon[51870]: from='client.24680 ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm05-94413-1","var":"pgp_num","val":"11"}]': finished 2026-03-09T20:23:05.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:05 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3911759105' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-09T20:23:05.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:05 vm05 ceph-mon[51870]: osdmap e136: 8 total, 8 up, 8 in 2026-03-09T20:23:06.125 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [----------] 22 tests from LibRadosMis snapshots: Running main() from gmock_main.cc 2026-03-09T20:23:06.125 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [==========] Running 11 tests from 2 test suites. 2026-03-09T20:23:06.125 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [----------] Global test environment set-up. 
2026-03-09T20:23:06.125 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [----------] 5 tests from NeoRadosSnapshots 2026-03-09T20:23:06.125 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [ RUN ] NeoRadosSnapshots.SnapList 2026-03-09T20:23:06.126 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [ OK ] NeoRadosSnapshots.SnapList (4776 ms) 2026-03-09T20:23:06.126 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [ RUN ] NeoRadosSnapshots.SnapRemove 2026-03-09T20:23:06.126 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [ OK ] NeoRadosSnapshots.SnapRemove (5506 ms) 2026-03-09T20:23:06.126 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [ RUN ] NeoRadosSnapshots.Rollback 2026-03-09T20:23:06.126 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [ OK ] NeoRadosSnapshots.Rollback (4227 ms) 2026-03-09T20:23:06.126 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [ RUN ] NeoRadosSnapshots.SnapGetName 2026-03-09T20:23:06.126 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [ OK ] NeoRadosSnapshots.SnapGetName (5094 ms) 2026-03-09T20:23:06.126 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [ RUN ] NeoRadosSnapshots.SnapCreateRemove 2026-03-09T20:23:06.126 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [ OK ] NeoRadosSnapshots.SnapCreateRemove (7353 ms) 2026-03-09T20:23:06.126 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [----------] 5 tests from NeoRadosSnapshots (26956 ms total) 2026-03-09T20:23:06.126 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: 2026-03-09T20:23:06.126 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [----------] 6 tests from NeoRadosSelfManagedSnaps 2026-03-09T20:23:06.126 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [ RUN ] NeoRadosSelfManagedSnaps.Snap 2026-03-09T20:23:06.126 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [ OK ] NeoRadosSelfManagedSnaps.Snap (4475 ms) 2026-03-09T20:23:06.126 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [ RUN ] NeoRadosSelfManagedSnaps.Rollback 2026-03-09T20:23:06.126 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [ OK ] NeoRadosSelfManagedSnaps.Rollback (6163 ms) 2026-03-09T20:23:06.126 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [ RUN ] NeoRadosSelfManagedSnaps.SnapOverlap 2026-03-09T20:23:06.126 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [ OK ] NeoRadosSelfManagedSnaps.SnapOverlap (8302 ms) 2026-03-09T20:23:06.126 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [ RUN ] NeoRadosSelfManagedSnaps.Bug11677 2026-03-09T20:23:06.126 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [ OK ] NeoRadosSelfManagedSnaps.Bug11677 (5456 ms) 2026-03-09T20:23:06.126 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [ RUN ] NeoRadosSelfManagedSnaps.OrderSnap 2026-03-09T20:23:06.126 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [ OK ] NeoRadosSelfManagedSnaps.OrderSnap (4095 ms) 2026-03-09T20:23:06.126 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [ RUN ] NeoRadosSelfManagedSnaps.ReusePurgedSnap 2026-03-09T20:23:06.126 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: Deleting snap 3 in pool ReusePurgedSnapvm05-95655-11. 2026-03-09T20:23:06.126 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: Waiting for snaps to purge. 
2026-03-09T20:23:06.126 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [ OK ] NeoRadosSelfManagedSnaps.ReusePurgedSnap (19960 ms) 2026-03-09T20:23:06.126 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [----------] 6 tests from NeoRadosSelfManagedSnaps (48452 ms total) 2026-03-09T20:23:06.126 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: 2026-03-09T20:23:06.126 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [----------] Global test environment tear-down 2026-03-09T20:23:06.126 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [==========] 11 tests from 2 test suites ran. (75408 ms total) 2026-03-09T20:23:06.126 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [ PASSED ] 11 tests. 2026-03-09T20:23:06.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:06 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3973316752' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm05-94758-16"}]: dispatch 2026-03-09T20:23:06.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:06 vm09 ceph-mon[54524]: from='client.49265 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm05-94758-16"}]: dispatch 2026-03-09T20:23:06.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:06 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:23:06.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:06 vm09 ceph-mon[54524]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:23:06.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:06 vm09 ceph-mon[54524]: from='client.49316 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-94592-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm05-94592-24"}]': finished 2026-03-09T20:23:06.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:06 vm09 ceph-mon[54524]: from='client.49265 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm05-94758-16"}]': finished 2026-03-09T20:23:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:06 vm09 ceph-mon[54524]: osdmap e137: 8 total, 8 up, 8 in 2026-03-09T20:23:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:06 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3973316752' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm05-94758-16"}]: dispatch 2026-03-09T20:23:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:06 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/4064831957' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm05-94310-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:06 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/1315223518' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm05-94281-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:06 vm09 ceph-mon[54524]: from='client.49987 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm05-94310-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:06 vm09 ceph-mon[54524]: from='client.49265 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm05-94758-16"}]: dispatch 2026-03-09T20:23:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:06 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1453567968' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm05-94413-2"}]: dispatch 2026-03-09T20:23:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:06 vm09 ceph-mon[54524]: from='client.49990 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm05-94281-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:06 vm09 ceph-mon[54524]: from='client.49993 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm05-94413-2"}]: dispatch 2026-03-09T20:23:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:06 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1453567968' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm05-94413-2"}]: dispatch 2026-03-09T20:23:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:06 vm09 ceph-mon[54524]: from='client.49993 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm05-94413-2"}]: dispatch 2026-03-09T20:23:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:06 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1453567968' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm05-94413-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:06 vm09 ceph-mon[54524]: from='client.49993 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm05-94413-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:06 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3973316752' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm05-94758-16"}]: dispatch 2026-03-09T20:23:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:06 vm05 ceph-mon[61345]: from='client.49265 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm05-94758-16"}]: dispatch 2026-03-09T20:23:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:06 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:23:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:06 vm05 ceph-mon[61345]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:23:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:06 vm05 ceph-mon[61345]: from='client.49316 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-94592-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm05-94592-24"}]': finished 2026-03-09T20:23:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:06 vm05 ceph-mon[61345]: from='client.49265 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm05-94758-16"}]': finished 2026-03-09T20:23:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:06 vm05 ceph-mon[61345]: osdmap e137: 8 total, 8 up, 8 in 2026-03-09T20:23:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:06 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3973316752' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm05-94758-16"}]: dispatch 2026-03-09T20:23:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:06 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/4064831957' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm05-94310-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:06 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1315223518' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm05-94281-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:06 vm05 ceph-mon[61345]: from='client.49987 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm05-94310-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:06 vm05 ceph-mon[61345]: from='client.49265 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm05-94758-16"}]: dispatch 2026-03-09T20:23:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:06 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1453567968' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm05-94413-2"}]: dispatch 2026-03-09T20:23:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:06 vm05 ceph-mon[61345]: from='client.49990 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm05-94281-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:06 vm05 ceph-mon[61345]: from='client.49993 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm05-94413-2"}]: dispatch 2026-03-09T20:23:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:06 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1453567968' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm05-94413-2"}]: dispatch 2026-03-09T20:23:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:06 vm05 ceph-mon[61345]: from='client.49993 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm05-94413-2"}]: dispatch 2026-03-09T20:23:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:06 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1453567968' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm05-94413-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:06 vm05 ceph-mon[61345]: from='client.49993 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm05-94413-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:06 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3973316752' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm05-94758-16"}]: dispatch 2026-03-09T20:23:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:06 vm05 ceph-mon[51870]: from='client.49265 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm05-94758-16"}]: dispatch 2026-03-09T20:23:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:06 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:23:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:06 vm05 ceph-mon[51870]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:23:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:06 vm05 ceph-mon[51870]: from='client.49316 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-94592-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm05-94592-24"}]': finished 2026-03-09T20:23:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:06 vm05 ceph-mon[51870]: from='client.49265 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm05-94758-16"}]': finished 2026-03-09T20:23:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:06 vm05 ceph-mon[51870]: osdmap e137: 8 total, 8 up, 8 in 2026-03-09T20:23:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:06 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3973316752' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm05-94758-16"}]: dispatch 2026-03-09T20:23:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:06 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/4064831957' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm05-94310-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:06 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1315223518' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm05-94281-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:06 vm05 ceph-mon[51870]: from='client.49987 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm05-94310-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:06 vm05 ceph-mon[51870]: from='client.49265 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm05-94758-16"}]: dispatch 2026-03-09T20:23:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:06 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/1453567968' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm05-94413-2"}]: dispatch 2026-03-09T20:23:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:06 vm05 ceph-mon[51870]: from='client.49990 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm05-94281-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:06 vm05 ceph-mon[51870]: from='client.49993 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm05-94413-2"}]: dispatch 2026-03-09T20:23:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:06 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1453567968' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm05-94413-2"}]: dispatch 2026-03-09T20:23:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:06 vm05 ceph-mon[51870]: from='client.49993 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm05-94413-2"}]: dispatch 2026-03-09T20:23:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:06 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1453567968' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm05-94413-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:06 vm05 ceph-mon[51870]: from='client.49993 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm05-94413-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:07.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:07 vm09 ceph-mon[54524]: pgmap v152: 364 pgs: 72 unknown, 292 active+clean; 462 KiB data, 671 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:23:07.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:07 vm09 ceph-mon[54524]: from='client.49987 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm05-94310-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:07.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:07 vm09 ceph-mon[54524]: from='client.49265 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm05-94758-16"}]': finished 2026-03-09T20:23:07.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:07 vm09 ceph-mon[54524]: from='client.49990 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWrite_vm05-94281-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:07.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:07 vm09 ceph-mon[54524]: from='client.49993 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm05-94413-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:23:07.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:07 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/1453567968' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm05-94413-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm05-94413-2"}]: dispatch 2026-03-09T20:23:07.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:07 vm09 ceph-mon[54524]: osdmap e138: 8 total, 8 up, 8 in 2026-03-09T20:23:07.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:07 vm09 ceph-mon[54524]: from='client.49993 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm05-94413-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm05-94413-2"}]: dispatch 2026-03-09T20:23:07.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:07 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1976652207' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94592-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:07.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:07 vm09 ceph-mon[54524]: from='client.49316 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94592-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:07.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:07 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1485324066' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-94758-21"}]: dispatch 2026-03-09T20:23:07.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:07 vm09 ceph-mon[54524]: from='client.49999 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-94758-21"}]: dispatch 2026-03-09T20:23:07.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:07 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1485324066' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm05-94758-21"}]: dispatch 2026-03-09T20:23:07.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:07 vm09 ceph-mon[54524]: from='client.49999 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm05-94758-21"}]: dispatch 2026-03-09T20:23:07.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:07 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/1485324066' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-94758-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:07.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:07 vm09 ceph-mon[54524]: from='client.49999 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-94758-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:07.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:07 vm05 ceph-mon[61345]: pgmap v152: 364 pgs: 72 unknown, 292 active+clean; 462 KiB data, 671 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:23:07.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:07 vm05 ceph-mon[61345]: from='client.49987 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm05-94310-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:07.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:07 vm05 ceph-mon[61345]: from='client.49265 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm05-94758-16"}]': finished 2026-03-09T20:23:07.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:07 vm05 ceph-mon[61345]: from='client.49990 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWrite_vm05-94281-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:07.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:07 vm05 ceph-mon[61345]: from='client.49993 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm05-94413-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:23:07.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:07 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1453567968' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm05-94413-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm05-94413-2"}]: dispatch 2026-03-09T20:23:07.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:07 vm05 ceph-mon[61345]: osdmap e138: 8 total, 8 up, 8 in 2026-03-09T20:23:07.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:07 vm05 ceph-mon[61345]: from='client.49993 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm05-94413-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm05-94413-2"}]: dispatch 2026-03-09T20:23:07.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:07 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1976652207' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94592-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:07.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:07 vm05 ceph-mon[61345]: from='client.49316 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94592-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:07.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:07 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1485324066' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-94758-21"}]: dispatch 2026-03-09T20:23:07.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:07 vm05 ceph-mon[61345]: from='client.49999 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-94758-21"}]: dispatch 2026-03-09T20:23:07.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:07 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1485324066' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm05-94758-21"}]: dispatch 2026-03-09T20:23:07.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:07 vm05 ceph-mon[61345]: from='client.49999 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm05-94758-21"}]: dispatch 2026-03-09T20:23:07.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:07 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1485324066' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-94758-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:07.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:07 vm05 ceph-mon[61345]: from='client.49999 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-94758-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:07.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:07 vm05 ceph-mon[51870]: pgmap v152: 364 pgs: 72 unknown, 292 active+clean; 462 KiB data, 671 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:23:07.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:07 vm05 ceph-mon[51870]: from='client.49987 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm05-94310-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:07.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:07 vm05 ceph-mon[51870]: from='client.49265 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm05-94758-16"}]': finished 2026-03-09T20:23:07.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:07 vm05 ceph-mon[51870]: from='client.49990 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWrite_vm05-94281-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:07.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:07 vm05 ceph-mon[51870]: from='client.49993 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm05-94413-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:23:07.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:07 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/1453567968' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm05-94413-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm05-94413-2"}]: dispatch 2026-03-09T20:23:07.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:07 vm05 ceph-mon[51870]: osdmap e138: 8 total, 8 up, 8 in 2026-03-09T20:23:07.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:07 vm05 ceph-mon[51870]: from='client.49993 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm05-94413-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm05-94413-2"}]: dispatch 2026-03-09T20:23:07.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:07 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1976652207' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94592-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:07.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:07 vm05 ceph-mon[51870]: from='client.49316 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94592-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:07.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:07 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1485324066' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-94758-21"}]: dispatch 2026-03-09T20:23:07.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:07 vm05 ceph-mon[51870]: from='client.49999 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-94758-21"}]: dispatch 2026-03-09T20:23:07.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:07 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1485324066' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm05-94758-21"}]: dispatch 2026-03-09T20:23:07.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:07 vm05 ceph-mon[51870]: from='client.49999 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm05-94758-21"}]: dispatch 2026-03-09T20:23:07.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:07 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/1485324066' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-94758-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:07.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:07 vm05 ceph-mon[51870]: from='client.49999 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-94758-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:08.909 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:23:08 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:23:08] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:23:09.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:09 vm09 ceph-mon[54524]: from='client.49316 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94592-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:09.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:09 vm09 ceph-mon[54524]: from='client.49999 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-94758-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:23:09.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:09 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1485324066' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm05-94758-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-94758-21"}]: dispatch 2026-03-09T20:23:09.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:09 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/1976652207' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94592-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:09.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:09 vm09 ceph-mon[54524]: osdmap e139: 8 total, 8 up, 8 in 2026-03-09T20:23:09.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:09 vm09 ceph-mon[54524]: from='client.49316 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94592-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:09.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:09 vm09 ceph-mon[54524]: from='client.49999 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm05-94758-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-94758-21"}]: dispatch 2026-03-09T20:23:09.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:09 vm09 ceph-mon[54524]: pgmap v155: 332 pgs: 40 unknown, 292 active+clean; 462 KiB data, 671 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:23:09.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:09 vm05 ceph-mon[61345]: from='client.49316 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94592-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:09.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:09 vm05 ceph-mon[61345]: from='client.49999 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-94758-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:23:09.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:09 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1485324066' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm05-94758-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-94758-21"}]: dispatch 2026-03-09T20:23:09.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:09 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1976652207' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94592-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:09.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:09 vm05 ceph-mon[61345]: osdmap e139: 8 total, 8 up, 8 in 2026-03-09T20:23:09.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:09 vm05 ceph-mon[61345]: from='client.49316 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94592-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:09.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:09 vm05 ceph-mon[61345]: from='client.49999 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm05-94758-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-94758-21"}]: dispatch 2026-03-09T20:23:09.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:09 vm05 ceph-mon[61345]: pgmap v155: 332 pgs: 40 unknown, 292 active+clean; 462 KiB data, 671 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:23:09.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:09 vm05 ceph-mon[51870]: from='client.49316 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94592-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:09.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:09 vm05 ceph-mon[51870]: from='client.49999 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-94758-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:23:09.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:09 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1485324066' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm05-94758-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-94758-21"}]: dispatch 2026-03-09T20:23:09.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:09 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/1976652207' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94592-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:09.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:09 vm05 ceph-mon[51870]: osdmap e139: 8 total, 8 up, 8 in 2026-03-09T20:23:09.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:09 vm05 ceph-mon[51870]: from='client.49316 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94592-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:09.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:09 vm05 ceph-mon[51870]: from='client.49999 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm05-94758-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-94758-21"}]: dispatch 2026-03-09T20:23:09.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:09 vm05 ceph-mon[51870]: pgmap v155: 332 pgs: 40 unknown, 292 active+clean; 462 KiB data, 671 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:23:10.341 INFO:tasks.workunit.client.0.vm05.stdout:ist: : entry=4 expected=4 2026-03-09T20:23:10.342 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 11:cfc208b3:::3:head 2026-03-09T20:23:10.342 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : cursor()=11:cfc208b3:::3:head expected=11:cfc208b3:::3:head 2026-03-09T20:23:10.342 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 11:cfc208b3:::3:head -> 3 2026-03-09T20:23:10.342 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : entry=3 expected=3 2026-03-09T20:23:10.342 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 11:c4fdafeb:::6:head 2026-03-09T20:23:10.342 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : cursor()=11:c4fdafeb:::6:head expected=11:c4fdafeb:::6:head 2026-03-09T20:23:10.342 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 11:c4fdafeb:::6:head -> 6 2026-03-09T20:23:10.342 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : entry=6 expected=6 2026-03-09T20:23:10.342 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 11:b29083e3:::5:head 2026-03-09T20:23:10.342 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : cursor()=11:b29083e3:::5:head expected=11:b29083e3:::5:head 2026-03-09T20:23:10.342 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 11:b29083e3:::5:head -> 5 2026-03-09T20:23:10.342 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : entry=5 expected=5 2026-03-09T20:23:10.342 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 11:89d3ae78:::11:head 2026-03-09T20:23:10.342 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : cursor()=11:89d3ae78:::11:head expected=11:89d3ae78:::11:head 2026-03-09T20:23:10.342 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 11:89d3ae78:::11:head -> 11 2026-03-09T20:23:10.342 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : entry=11 expected=11 2026-03-09T20:23:10.342 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 11:863748b0:::15:head 2026-03-09T20:23:10.342 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : cursor()=11:863748b0:::15:head expected=11:863748b0:::15:head 2026-03-09T20:23:10.342 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 11:863748b0:::15:head -> 15 2026-03-09T20:23:10.342 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : entry=15 
expected=15 2026-03-09T20:23:10.342 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 11:6cac518f:::0:head 2026-03-09T20:23:10.342 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : cursor()=11:6cac518f:::0:head expected=11:6cac518f:::0:head 2026-03-09T20:23:10.342 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 11:6cac518f:::0:head -> 0 2026-03-09T20:23:10.342 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : entry=0 expected=0 2026-03-09T20:23:10.342 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 11:bd63b0f1:::8:head 2026-03-09T20:23:10.342 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : cursor()=11:bd63b0f1:::8:head expected=11:bd63b0f1:::8:head 2026-03-09T20:23:10.342 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 11:bd63b0f1:::8:head -> 8 2026-03-09T20:23:10.342 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : entry=8 expected=8 2026-03-09T20:23:10.342 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 11:02547ec2:::1:head 2026-03-09T20:23:10.342 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : cursor()=11:02547ec2:::1:head expected=11:02547ec2:::1:head 2026-03-09T20:23:10.342 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 11:02547ec2:::1:head -> 1 2026-03-09T20:23:10.342 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : entry=1 expected=1 2026-03-09T20:23:10.342 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [ OK ] LibRadosList.ListObjectsCursor (742 ms) 2026-03-09T20:23:10.342 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [ RUN ] LibRadosList.EnumerateObjects 2026-03-09T20:23:10.342 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [ OK ] LibRadosList.EnumerateObjects (63560 ms) 2026-03-09T20:23:10.342 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [ RUN ] LibRadosList.EnumerateObjectsSplit 2026-03-09T20:23:10.342 INFO:tasks.workunit.client.0.vm05.stdout: api_list: split 0/5 -> MIN 11:33333333::::head 2026-03-09T20:23:10.342 INFO:tasks.workunit.client.0.vm05.stdout: api_list: split 1/5 -> 11:33333333::::head 11:66666666::::head 2026-03-09T20:23:10.342 INFO:tasks.workunit.client.0.vm05.stdout: api_list: split 2/5 -> 11:66666666::::head 11:99999999::::head 2026-03-09T20:23:10.342 INFO:tasks.workunit.client.0.vm05.stdout: api_list: split 3/5 -> 11:99999999::::head 11:cccccccc::::head 2026-03-09T20:23:10.342 INFO:tasks.workunit.client.0.vm05.stdout: api_list: split 4/5 -> 11:cccccccc::::head MAX 2026-03-09T20:23:10.342 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [ OK ] LibRadosList.EnumerateObjectsSplit (8571 ms) 2026-03-09T20:23:10.342 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [----------] 7 tests from LibRadosList (73748 ms total) 2026-03-09T20:23:10.342 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 2026-03-09T20:23:10.342 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [----------] 3 tests from LibRadosListEC 2026-03-09T20:23:10.342 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [ RUN ] LibRadosListEC.ListObjects 2026-03-09T20:23:10.342 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [ OK ] LibRadosListEC.ListObjects (1068 ms) 2026-03-09T20:23:10.343 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [ RUN ] LibRadosListEC.ListObjectsNS 2026-03-09T20:23:10.343 INFO:tasks.workunit.client.0.vm05.stdout: api_list: myset foo1,foo2,foo3 2026-03-09T20:23:10.343 INFO:tasks.workunit.client.0.vm05.stdout: api_list: foo1 2026-03-09T20:23:10.343 INFO:tasks.workunit.client.0.vm05.stdout: api_list: foo2 2026-03-09T20:23:10.343 
INFO:tasks.workunit.client.0.vm05.stdout: api_list: foo3 2026-03-09T20:23:10.343 INFO:tasks.workunit.client.0.vm05.stdout: api_list: myset foo1,foo4,foo5 2026-03-09T20:23:10.343 INFO:tasks.workunit.client.0.vm05.stdout: api_list: foo4 2026-03-09T20:23:10.343 INFO:tasks.workunit.client.0.vm05.stdout: api_list: foo5 2026-03-09T20:23:10.343 INFO:tasks.workunit.client.0.vm05.stdout: api_list: foo1 2026-03-09T20:23:10.343 INFO:tasks.workunit.client.0.vm05.stdout: api_list: myset foo6,foo7 2026-03-09T20:23:10.343 INFO:tasks.workunit.client.0.vm05.stdout: api_list: foo7 2026-03-09T20:23:10.343 INFO:tasks.workunit.client.0.vm05.stdout: api_list: foo6 2026-03-09T20:23:10.343 INFO:tasks.workunit.client.0.vm05.stdout: api_list: myset :foo1,:foo2,:foo3,ns1:foo1,ns1:foo4,ns1:foo5,ns2:foo6,ns2:foo7 2026-03-09T20:23:10.343 INFO:tasks.workunit.client.0.vm05.stdout: api_list: ns1:foo4 2026-03-09T20:23:10.343 INFO:tasks.workunit.client.0.vm05.stdout: api_list: ns1:foo5 2026-03-09T20:23:10.343 INFO:tasks.workunit.client.0.vm05.stdout: api_list: ns2:foo7 2026-03-09T20:23:10.343 INFO:tasks.workunit.client.0.vm05.stdout: api_list: ns2:foo6 2026-03-09T20:23:10.343 INFO:tasks.workunit.client.0.vm05.stdout: api_list: ns1:foo1 2026-03-09T20:23:10.343 INFO:tasks.workunit.client.0.vm05.stdout: api_list: :foo1 2026-03-09T20:23:10.343 INFO:tasks.workunit.client.0.vm05.stdout: api_list: :foo2 2026-03-09T20:23:10.343 INFO:tasks.workunit.client.0.vm05.stdout: api_list: :foo3 2026-03-09T20:23:10.343 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [ OK ] LibRadosListEC.ListObjectsNS (31 ms) 2026-03-09T20:23:10.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:10 vm09 ceph-mon[54524]: from='client.49993 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm05-94413-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm05-94413-2"}]': finished 2026-03-09T20:23:10.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:10 vm09 ceph-mon[54524]: from='client.49316 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94592-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:10.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:10 vm09 ceph-mon[54524]: osdmap e140: 8 total, 8 up, 8 in 2026-03-09T20:23:10.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:10 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/528393856' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlock_vm05-94281-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:10.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:10 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/2861360550' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm05-94310-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:10.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:10 vm09 ceph-mon[54524]: from='client.50008 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlock_vm05-94281-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:10.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:10 vm09 ceph-mon[54524]: from='client.50011 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm05-94310-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:10.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:10 vm05 ceph-mon[61345]: from='client.49993 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm05-94413-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm05-94413-2"}]': finished 2026-03-09T20:23:10.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:10 vm05 ceph-mon[61345]: from='client.49316 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94592-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:10.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:10 vm05 ceph-mon[61345]: osdmap e140: 8 total, 8 up, 8 in 2026-03-09T20:23:10.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:10 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/528393856' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlock_vm05-94281-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:10.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:10 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/2861360550' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm05-94310-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:10.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:10 vm05 ceph-mon[61345]: from='client.50008 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlock_vm05-94281-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:10.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:10 vm05 ceph-mon[61345]: from='client.50011 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm05-94310-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:10.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:10 vm05 ceph-mon[51870]: from='client.49993 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm05-94413-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm05-94413-2"}]': finished 2026-03-09T20:23:10.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:10 vm05 ceph-mon[51870]: from='client.49316 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94592-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:10.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:10 vm05 ceph-mon[51870]: osdmap e140: 8 total, 8 up, 8 in 2026-03-09T20:23:10.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:10 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/528393856' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlock_vm05-94281-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:10.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:10 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/2861360550' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm05-94310-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:10.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:10 vm05 ceph-mon[51870]: from='client.50008 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlock_vm05-94281-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:10.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:10 vm05 ceph-mon[51870]: from='client.50011 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm05-94310-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:11.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:11 vm09 ceph-mon[54524]: pgmap v157: 404 pgs: 112 unknown, 292 active+clean; 462 KiB data, 671 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:23:11.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:11 vm09 ceph-mon[54524]: from='client.49999 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm05-94758-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-94758-21"}]': finished 2026-03-09T20:23:11.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:11 vm09 ceph-mon[54524]: from='client.50008 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlock_vm05-94281-26","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:11.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:11 vm09 ceph-mon[54524]: from='client.50011 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm05-94310-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:11.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:11 vm09 ceph-mon[54524]: osdmap e141: 8 total, 8 up, 8 in 2026-03-09T20:23:11.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:11 vm09 ceph-mon[54524]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:23:11.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:11 vm09 ceph-mon[54524]: osdmap e142: 8 total, 8 up, 8 in 2026-03-09T20:23:11.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:11 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1453567968' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm05-94413-2"}]: dispatch 2026-03-09T20:23:11.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:11 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1976652207' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-94592-24"}]: dispatch 2026-03-09T20:23:11.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:11 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/1247538280' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm05-94281-27"}]: dispatch 2026-03-09T20:23:11.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:11 vm05 ceph-mon[61345]: pgmap v157: 404 pgs: 112 unknown, 292 active+clean; 462 KiB data, 671 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:23:11.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:11 vm05 ceph-mon[61345]: from='client.49999 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm05-94758-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-94758-21"}]': finished 2026-03-09T20:23:11.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:11 vm05 ceph-mon[61345]: from='client.50008 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlock_vm05-94281-26","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:11.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:11 vm05 ceph-mon[61345]: from='client.50011 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm05-94310-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:11.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:11 vm05 ceph-mon[61345]: osdmap e141: 8 total, 8 up, 8 in 2026-03-09T20:23:11.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:11 vm05 ceph-mon[61345]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:23:11.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:11 vm05 ceph-mon[61345]: osdmap e142: 8 total, 8 up, 8 in 2026-03-09T20:23:11.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:11 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1453567968' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm05-94413-2"}]: dispatch 2026-03-09T20:23:11.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:11 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1976652207' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-94592-24"}]: dispatch 2026-03-09T20:23:11.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:11 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1247538280' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm05-94281-27"}]: dispatch 2026-03-09T20:23:11.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:11 vm05 ceph-mon[51870]: pgmap v157: 404 pgs: 112 unknown, 292 active+clean; 462 KiB data, 671 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:23:11.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:11 vm05 ceph-mon[51870]: from='client.49999 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm05-94758-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-94758-21"}]': finished 2026-03-09T20:23:11.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:11 vm05 ceph-mon[51870]: from='client.50008 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlock_vm05-94281-26","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:11.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:11 vm05 ceph-mon[51870]: from='client.50011 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm05-94310-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:11.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:11 vm05 ceph-mon[51870]: osdmap e141: 8 total, 8 up, 8 in 2026-03-09T20:23:11.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:11 vm05 ceph-mon[51870]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:23:11.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:11 vm05 ceph-mon[51870]: osdmap e142: 8 total, 8 up, 8 in 2026-03-09T20:23:11.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:11 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1453567968' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm05-94413-2"}]: dispatch 2026-03-09T20:23:11.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:11 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1976652207' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-94592-24"}]: dispatch 2026-03-09T20:23:11.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:11 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1247538280' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm05-94281-27"}]: dispatch 2026-03-09T20:23:12.274 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:12 vm09 ceph-mon[54524]: from='client.49993 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm05-94413-2"}]: dispatch 2026-03-09T20:23:12.274 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:12 vm09 ceph-mon[54524]: from='client.49316 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-94592-24"}]: dispatch 2026-03-09T20:23:12.274 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:12 vm09 ceph-mon[54524]: from='client.50023 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm05-94281-27"}]: dispatch 2026-03-09T20:23:12.274 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:12 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/1247538280' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm05-94281-27"}]: dispatch 2026-03-09T20:23:12.274 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:12 vm09 ceph-mon[54524]: from='client.50023 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm05-94281-27"}]: dispatch 2026-03-09T20:23:12.278 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:12 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1247538280' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm05-94281-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:12.278 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:12 vm09 ceph-mon[54524]: from='client.50023 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm05-94281-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:12.278 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:12 vm09 ceph-mon[54524]: from='client.49993 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm05-94413-2"}]': finished 2026-03-09T20:23:12.278 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:12 vm09 ceph-mon[54524]: from='client.49316 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-94592-24"}]': finished 2026-03-09T20:23:12.278 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:12 vm09 ceph-mon[54524]: from='client.50023 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm05-94281-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:23:12.278 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:12 vm09 ceph-mon[54524]: osdmap e143: 8 total, 8 up, 8 in 2026-03-09T20:23:12.278 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:12 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1453567968' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm05-94413-2"}]: dispatch 2026-03-09T20:23:12.278 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:12 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1976652207' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-94592-24"}]: dispatch 2026-03-09T20:23:12.278 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:12 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1247538280' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm05-94281-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm05-94281-27"}]: dispatch 2026-03-09T20:23:12.278 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:12 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3296192592' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm05-94310-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:12.278 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:12 vm09 ceph-mon[54524]: from='client.49993 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm05-94413-2"}]: dispatch 2026-03-09T20:23:12.278 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:12 vm09 ceph-mon[54524]: from='client.50023 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm05-94281-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm05-94281-27"}]: dispatch 2026-03-09T20:23:12.278 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:12 vm09 ceph-mon[54524]: from='client.50017 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm05-94310-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:12.278 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:12 vm09 ceph-mon[54524]: from='client.49316 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-94592-24"}]: dispatch 2026-03-09T20:23:12.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:12 vm05 ceph-mon[61345]: from='client.49993 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm05-94413-2"}]: dispatch 2026-03-09T20:23:12.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:12 vm05 ceph-mon[61345]: from='client.49316 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-94592-24"}]: dispatch 2026-03-09T20:23:12.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:12 vm05 ceph-mon[61345]: from='client.50023 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm05-94281-27"}]: dispatch 2026-03-09T20:23:12.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:12 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1247538280' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm05-94281-27"}]: dispatch 2026-03-09T20:23:12.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:12 vm05 ceph-mon[61345]: from='client.50023 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm05-94281-27"}]: dispatch 2026-03-09T20:23:12.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:12 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1247538280' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm05-94281-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:12.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:12 vm05 ceph-mon[61345]: from='client.50023 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm05-94281-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:12.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:12 vm05 ceph-mon[61345]: from='client.49993 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm05-94413-2"}]': finished 2026-03-09T20:23:12.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:12 vm05 ceph-mon[61345]: from='client.49316 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-94592-24"}]': finished 2026-03-09T20:23:12.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:12 vm05 ceph-mon[61345]: from='client.50023 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm05-94281-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:23:12.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:12 vm05 ceph-mon[61345]: osdmap e143: 8 total, 8 up, 8 in 2026-03-09T20:23:12.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:12 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1453567968' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm05-94413-2"}]: dispatch 2026-03-09T20:23:12.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:12 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1976652207' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-94592-24"}]: dispatch 2026-03-09T20:23:12.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:12 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1247538280' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm05-94281-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm05-94281-27"}]: dispatch 2026-03-09T20:23:12.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:12 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3296192592' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm05-94310-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:12.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:12 vm05 ceph-mon[61345]: from='client.49993 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm05-94413-2"}]: dispatch 2026-03-09T20:23:12.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:12 vm05 ceph-mon[61345]: from='client.50023 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm05-94281-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm05-94281-27"}]: dispatch 2026-03-09T20:23:12.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:12 vm05 ceph-mon[61345]: from='client.50017 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm05-94310-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:12.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:12 vm05 ceph-mon[61345]: from='client.49316 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-94592-24"}]: dispatch 2026-03-09T20:23:12.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:12 vm05 ceph-mon[51870]: from='client.49993 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm05-94413-2"}]: dispatch 2026-03-09T20:23:12.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:12 vm05 ceph-mon[51870]: from='client.49316 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-94592-24"}]: dispatch 2026-03-09T20:23:12.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:12 vm05 ceph-mon[51870]: from='client.50023 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm05-94281-27"}]: dispatch 2026-03-09T20:23:12.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:12 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1247538280' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm05-94281-27"}]: dispatch 2026-03-09T20:23:12.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:12 vm05 ceph-mon[51870]: from='client.50023 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm05-94281-27"}]: dispatch 2026-03-09T20:23:12.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:12 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/1247538280' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm05-94281-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:12.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:12 vm05 ceph-mon[51870]: from='client.50023 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm05-94281-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:12.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:12 vm05 ceph-mon[51870]: from='client.49993 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm05-94413-2"}]': finished 2026-03-09T20:23:12.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:12 vm05 ceph-mon[51870]: from='client.49316 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-94592-24"}]': finished 2026-03-09T20:23:12.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:12 vm05 ceph-mon[51870]: from='client.50023 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm05-94281-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:23:12.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:12 vm05 ceph-mon[51870]: osdmap e143: 8 total, 8 up, 8 in 2026-03-09T20:23:12.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:12 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1453567968' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm05-94413-2"}]: dispatch 2026-03-09T20:23:12.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:12 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1976652207' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-94592-24"}]: dispatch 2026-03-09T20:23:12.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:12 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1247538280' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm05-94281-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm05-94281-27"}]: dispatch 2026-03-09T20:23:12.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:12 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3296192592' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm05-94310-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:12.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:12 vm05 ceph-mon[51870]: from='client.49993 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm05-94413-2"}]: dispatch 2026-03-09T20:23:12.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:12 vm05 ceph-mon[51870]: from='client.50023 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm05-94281-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm05-94281-27"}]: dispatch 2026-03-09T20:23:12.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:12 vm05 ceph-mon[51870]: from='client.50017 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm05-94310-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:12.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:12 vm05 ceph-mon[51870]: from='client.49316 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-94592-24"}]: dispatch 2026-03-09T20:23:13.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:13 vm05 ceph-mon[61345]: pgmap v160: 300 pgs: 8 creating+peering, 292 active+clean; 462 KiB data, 675 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:23:13.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:13 vm05 ceph-mon[61345]: from='client.49993 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm05-94413-2"}]': finished 2026-03-09T20:23:13.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:13 vm05 ceph-mon[61345]: from='client.50017 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm05-94310-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:13.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:13 vm05 ceph-mon[61345]: from='client.49316 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-94592-24"}]': finished 2026-03-09T20:23:13.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:13 vm05 ceph-mon[61345]: osdmap e144: 8 total, 8 up, 8 in 2026-03-09T20:23:13.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:13 vm05 ceph-mon[51870]: pgmap v160: 300 pgs: 8 creating+peering, 292 active+clean; 462 KiB data, 675 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:23:13.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:13 vm05 ceph-mon[51870]: from='client.49993 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm05-94413-2"}]': finished 2026-03-09T20:23:13.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:13 vm05 ceph-mon[51870]: from='client.50017 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm05-94310-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:13.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:13 vm05 ceph-mon[51870]: from='client.49316 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-94592-24"}]': finished 2026-03-09T20:23:13.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:13 vm05 ceph-mon[51870]: osdmap e144: 8 total, 8 up, 8 in 2026-03-09T20:23:13.772 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:13 vm09 ceph-mon[54524]: pgmap v160: 300 pgs: 8 creating+peering, 292 active+clean; 462 KiB data, 675 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:23:13.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:13 vm09 ceph-mon[54524]: from='client.49993 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm05-94413-2"}]': finished 2026-03-09T20:23:13.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:13 vm09 ceph-mon[54524]: from='client.50017 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm05-94310-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:13.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:13 vm09 ceph-mon[54524]: from='client.49316 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-94592-24"}]': finished 2026-03-09T20:23:13.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:13 vm09 ceph-mon[54524]: osdmap e144: 8 total, 8 up, 8 in 2026-03-09T20:23:15.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:15 vm05 ceph-mon[61345]: pgmap v163: 332 pgs: 32 unknown, 8 creating+peering, 292 active+clean; 462 KiB data, 675 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:23:15.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:15 vm05 ceph-mon[61345]: from='client.50023 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWrite_vm05-94281-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm05-94281-27"}]': finished 2026-03-09T20:23:15.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:15 vm05 ceph-mon[61345]: osdmap e145: 8 total, 8 up, 8 in 2026-03-09T20:23:15.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:15 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/968643416' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94413-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:15.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:15 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1165992837' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm05-94592-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:15.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:15 vm05 ceph-mon[61345]: from='client.49364 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm05-94592-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:15.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:15 vm05 ceph-mon[61345]: from='client.50029 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94413-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:15.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:15 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:23:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:15 vm05 ceph-mon[51870]: pgmap v163: 332 pgs: 32 unknown, 8 creating+peering, 292 active+clean; 462 KiB data, 675 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:23:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:15 vm05 ceph-mon[51870]: from='client.50023 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWrite_vm05-94281-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm05-94281-27"}]': finished 2026-03-09T20:23:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:15 vm05 ceph-mon[51870]: osdmap e145: 8 total, 8 up, 8 in 2026-03-09T20:23:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:15 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/968643416' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94413-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:15 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/1165992837' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm05-94592-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:15 vm05 ceph-mon[51870]: from='client.49364 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm05-94592-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:15 vm05 ceph-mon[51870]: from='client.50029 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94413-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:15 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:23:15.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:15 vm09 ceph-mon[54524]: pgmap v163: 332 pgs: 32 unknown, 8 creating+peering, 292 active+clean; 462 KiB data, 675 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:23:15.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:15 vm09 ceph-mon[54524]: from='client.50023 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWrite_vm05-94281-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm05-94281-27"}]': finished 2026-03-09T20:23:15.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:15 vm09 ceph-mon[54524]: osdmap e145: 8 total, 8 up, 8 in 2026-03-09T20:23:15.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:15 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/968643416' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94413-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:15.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:15 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/1165992837' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm05-94592-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:15.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:15 vm09 ceph-mon[54524]: from='client.49364 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm05-94592-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:15.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:15 vm09 ceph-mon[54524]: from='client.50029 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94413-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:15.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:15 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:23:15.773 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:23:15 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:23:16.346 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [ RUN ] LibRadosListEC.ListObjectsStart 2026-03-09T20:23:16.346 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 1 0 2026-03-09T20:23:16.346 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 10 0 2026-03-09T20:23:16.346 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 13 0 2026-03-09T20:23:16.346 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 7 0 2026-03-09T20:23:16.346 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 14 0 2026-03-09T20:23:16.346 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 0 0 2026-03-09T20:23:16.346 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 15 0 2026-03-09T20:23:16.346 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 11 0 2026-03-09T20:23:16.346 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 5 0 2026-03-09T20:23:16.346 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 8 0 2026-03-09T20:23:16.346 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 6 0 2026-03-09T20:23:16.346 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 3 0 2026-03-09T20:23:16.346 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 4 0 2026-03-09T20:23:16.346 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 12 0 2026-03-09T20:23:16.346 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 9 0 2026-03-09T20:23:16.346 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 2 0 2026-03-09T20:23:16.346 INFO:tasks.workunit.client.0.vm05.stdout: api_list: have 1 expect one of 0,1,10,11,12,13,14,15,2,3,4,5,6,7,8,9 2026-03-09T20:23:16.346 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [ OK ] LibRadosListEC.ListObjectsStart (50 ms) 2026-03-09T20:23:16.346 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [----------] 3 tests from LibRadosListEC (1149 ms total) 2026-03-09T20:23:16.346 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 2026-03-09T20:23:16.346 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [----------] 1 test from LibRadosListNP 2026-03-09T20:23:16.346 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [ RUN ] LibRadosListNP.ListObjectsError 2026-03-09T20:23:16.346 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [ OK ] LibRadosListNP.ListObjectsError (3112 ms) 2026-03-09T20:23:16.346 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [----------] 1 test from LibRadosListNP (3112 ms 
total) 2026-03-09T20:23:16.346 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 2026-03-09T20:23:16.346 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [----------] Global test environment tear-down 2026-03-09T20:23:16.346 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [==========] 11 tests from 3 test suites ran. (86631 ms total) 2026-03-09T20:23:16.346 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [ PASSED ] 11 tests. 2026-03-09T20:23:16.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:16 vm05 ceph-mon[61345]: from='client.49364 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm05-94592-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:16.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:16 vm05 ceph-mon[61345]: from='client.50029 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94413-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:16.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:16 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:23:16.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:16 vm05 ceph-mon[61345]: osdmap e146: 8 total, 8 up, 8 in 2026-03-09T20:23:16.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:16 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/575786612' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm05-94310-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:16.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:16 vm05 ceph-mon[61345]: from='client.49367 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm05-94310-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:16.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:16 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/968643416' entity='client.admin' cmd=[{"prefix":"osd pool rm","pool": "test-rados-api-vm05-94413-3","pool2":"test-rados-api-vm05-94413-3","yes_i_really_really_mean_it_not_faking": true}]: dispatch 2026-03-09T20:23:16.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:16 vm05 ceph-mon[61345]: from='client.50029 ' entity='client.admin' cmd=[{"prefix":"osd pool rm","pool": "test-rados-api-vm05-94413-3","pool2":"test-rados-api-vm05-94413-3","yes_i_really_really_mean_it_not_faking": true}]: dispatch 2026-03-09T20:23:16.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:16 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:23:16.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:16 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:23:16.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:16 vm05 ceph-mon[61345]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:23:16.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:16 vm05 ceph-mon[61345]: from='client.49367 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm05-94310-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:16.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:16 vm05 ceph-mon[61345]: from='client.50029 ' entity='client.admin' cmd='[{"prefix":"osd pool rm","pool": "test-rados-api-vm05-94413-3","pool2":"test-rados-api-vm05-94413-3","yes_i_really_really_mean_it_not_faking": true}]': finished 2026-03-09T20:23:16.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:16 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:23:16.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:16 vm05 ceph-mon[61345]: osdmap e147: 8 total, 8 up, 8 in 2026-03-09T20:23:16.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:16 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-13"}]: dispatch 2026-03-09T20:23:16.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:16 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1247538280' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm05-94281-27"}]: dispatch 2026-03-09T20:23:16.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:16 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-13"}]: dispatch 2026-03-09T20:23:16.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:16 vm05 ceph-mon[61345]: from='client.50023 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm05-94281-27"}]: dispatch 2026-03-09T20:23:16.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:16 vm05 ceph-mon[51870]: from='client.49364 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm05-94592-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:16.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:16 vm05 ceph-mon[51870]: from='client.50029 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94413-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:16.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:16 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:23:16.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:16 vm05 ceph-mon[51870]: osdmap e146: 8 total, 8 up, 8 in 2026-03-09T20:23:16.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:16 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/575786612' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm05-94310-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:16.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:16 vm05 ceph-mon[51870]: from='client.49367 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm05-94310-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:16.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:16 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/968643416' entity='client.admin' cmd=[{"prefix":"osd pool rm","pool": "test-rados-api-vm05-94413-3","pool2":"test-rados-api-vm05-94413-3","yes_i_really_really_mean_it_not_faking": true}]: dispatch 2026-03-09T20:23:16.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:16 vm05 ceph-mon[51870]: from='client.50029 ' entity='client.admin' cmd=[{"prefix":"osd pool rm","pool": "test-rados-api-vm05-94413-3","pool2":"test-rados-api-vm05-94413-3","yes_i_really_really_mean_it_not_faking": true}]: dispatch 2026-03-09T20:23:16.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:16 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:23:16.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:16 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:23:16.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:16 vm05 ceph-mon[51870]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:23:16.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:16 vm05 ceph-mon[51870]: from='client.49367 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm05-94310-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:16.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:16 vm05 ceph-mon[51870]: from='client.50029 ' entity='client.admin' cmd='[{"prefix":"osd pool rm","pool": "test-rados-api-vm05-94413-3","pool2":"test-rados-api-vm05-94413-3","yes_i_really_really_mean_it_not_faking": true}]': finished 2026-03-09T20:23:16.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:16 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:23:16.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:16 vm05 ceph-mon[51870]: osdmap e147: 8 total, 8 up, 8 in 2026-03-09T20:23:16.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:16 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-13"}]: dispatch 2026-03-09T20:23:16.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:16 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/1247538280' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm05-94281-27"}]: dispatch 2026-03-09T20:23:16.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:16 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-13"}]: dispatch 2026-03-09T20:23:16.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:16 vm05 ceph-mon[51870]: from='client.50023 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm05-94281-27"}]: dispatch 2026-03-09T20:23:16.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:16 vm09 ceph-mon[54524]: from='client.49364 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm05-94592-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:16.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:16 vm09 ceph-mon[54524]: from='client.50029 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94413-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:16.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:16 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:23:16.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:16 vm09 ceph-mon[54524]: osdmap e146: 8 total, 8 up, 8 in 2026-03-09T20:23:16.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:16 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/575786612' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm05-94310-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:16.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:16 vm09 ceph-mon[54524]: from='client.49367 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm05-94310-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:16.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:16 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/968643416' entity='client.admin' cmd=[{"prefix":"osd pool rm","pool": "test-rados-api-vm05-94413-3","pool2":"test-rados-api-vm05-94413-3","yes_i_really_really_mean_it_not_faking": true}]: dispatch 2026-03-09T20:23:16.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:16 vm09 ceph-mon[54524]: from='client.50029 ' entity='client.admin' cmd=[{"prefix":"osd pool rm","pool": "test-rados-api-vm05-94413-3","pool2":"test-rados-api-vm05-94413-3","yes_i_really_really_mean_it_not_faking": true}]: dispatch 2026-03-09T20:23:16.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:16 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:23:16.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:16 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:23:16.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:16 vm09 ceph-mon[54524]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:23:16.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:16 vm09 ceph-mon[54524]: from='client.49367 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm05-94310-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:16.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:16 vm09 ceph-mon[54524]: from='client.50029 ' entity='client.admin' cmd='[{"prefix":"osd pool rm","pool": "test-rados-api-vm05-94413-3","pool2":"test-rados-api-vm05-94413-3","yes_i_really_really_mean_it_not_faking": true}]': finished 2026-03-09T20:23:16.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:16 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:23:16.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:16 vm09 ceph-mon[54524]: osdmap e147: 8 total, 8 up, 8 in 2026-03-09T20:23:16.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:16 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-13"}]: dispatch 2026-03-09T20:23:16.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:16 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/1247538280' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm05-94281-27"}]: dispatch 2026-03-09T20:23:16.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:16 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-13"}]: dispatch 2026-03-09T20:23:16.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:16 vm09 ceph-mon[54524]: from='client.50023 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm05-94281-27"}]: dispatch 2026-03-09T20:23:17.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:17 vm05 ceph-mon[61345]: pgmap v166: 404 pgs: 21 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 23 unknown, 54 creating+peering, 300 active+clean; 463 KiB data, 677 MiB used, 159 GiB / 160 GiB avail; 4.2 KiB/s rd, 5.0 KiB/s wr, 5 op/s 2026-03-09T20:23:17.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:17 vm05 ceph-mon[61345]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:23:17.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:17 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-13"}]': finished 2026-03-09T20:23:17.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:17 vm05 ceph-mon[61345]: from='client.50023 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm05-94281-27"}]': finished 2026-03-09T20:23:17.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:17 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1247538280' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm05-94281-27"}]: dispatch 2026-03-09T20:23:17.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:17 vm05 ceph-mon[61345]: osdmap e148: 8 total, 8 up, 8 in 2026-03-09T20:23:17.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:17 vm05 ceph-mon[61345]: from='client.50023 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm05-94281-27"}]: dispatch 2026-03-09T20:23:17.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:17 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1409580219' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm05-94592-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:17.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:17 vm05 ceph-mon[51870]: pgmap v166: 404 pgs: 21 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 23 unknown, 54 creating+peering, 300 active+clean; 463 KiB data, 677 MiB used, 159 GiB / 160 GiB avail; 4.2 KiB/s rd, 5.0 KiB/s wr, 5 op/s 2026-03-09T20:23:17.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:17 vm05 ceph-mon[51870]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:23:17.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:17 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-13"}]': finished 2026-03-09T20:23:17.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:17 vm05 ceph-mon[51870]: from='client.50023 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm05-94281-27"}]': finished 2026-03-09T20:23:17.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:17 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1247538280' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm05-94281-27"}]: dispatch 2026-03-09T20:23:17.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:17 vm05 ceph-mon[51870]: osdmap e148: 8 total, 8 up, 8 in 2026-03-09T20:23:17.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:17 vm05 ceph-mon[51870]: from='client.50023 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm05-94281-27"}]: dispatch 2026-03-09T20:23:17.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:17 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1409580219' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm05-94592-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:17.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:17 vm09 ceph-mon[54524]: pgmap v166: 404 pgs: 21 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 23 unknown, 54 creating+peering, 300 active+clean; 463 KiB data, 677 MiB used, 159 GiB / 160 GiB avail; 4.2 KiB/s rd, 5.0 KiB/s wr, 5 op/s 2026-03-09T20:23:17.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:17 vm09 ceph-mon[54524]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:23:17.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:17 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-13"}]': finished 2026-03-09T20:23:17.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:17 vm09 ceph-mon[54524]: from='client.50023 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm05-94281-27"}]': finished 2026-03-09T20:23:17.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:17 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/1247538280' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm05-94281-27"}]: dispatch 2026-03-09T20:23:17.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:17 vm09 ceph-mon[54524]: osdmap e148: 8 total, 8 up, 8 in 2026-03-09T20:23:17.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:17 vm09 ceph-mon[54524]: from='client.50023 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm05-94281-27"}]: dispatch 2026-03-09T20:23:17.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:17 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1409580219' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm05-94592-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:18.345 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: Running main() from gmock_main.cc 2026-03-09T20:23:18.345 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [==========] Running 42 tests from 2 test suites. 2026-03-09T20:23:18.345 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [----------] Global test environment set-up. 2026-03-09T20:23:18.345 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [----------] 26 tests from LibRadosAio 2026-03-09T20:23:18.345 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.TooBig 2026-03-09T20:23:18.345 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.TooBig (2909 ms) 2026-03-09T20:23:18.345 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.SimpleWrite 2026-03-09T20:23:18.345 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.SimpleWrite (2988 ms) 2026-03-09T20:23:18.345 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.WaitForSafe 2026-03-09T20:23:18.345 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.WaitForSafe (3905 ms) 2026-03-09T20:23:18.345 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.RoundTrip 2026-03-09T20:23:18.345 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.RoundTrip (2743 ms) 2026-03-09T20:23:18.345 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.RoundTrip2 2026-03-09T20:23:18.345 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.RoundTrip2 (3076 ms) 2026-03-09T20:23:18.345 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.RoundTrip3 2026-03-09T20:23:18.345 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.RoundTrip3 (2934 ms) 2026-03-09T20:23:18.345 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.RoundTripAppend 2026-03-09T20:23:18.345 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.RoundTripAppend (3293 ms) 2026-03-09T20:23:18.345 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.RemoveTest 2026-03-09T20:23:18.345 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.RemoveTest (3041 ms) 2026-03-09T20:23:18.345 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.XattrsRoundTrip 2026-03-09T20:23:18.345 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.XattrsRoundTrip (3201 ms) 2026-03-09T20:23:18.345 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.RmXattr 2026-03-09T20:23:18.345 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.RmXattr (3070 ms) 2026-03-09T20:23:18.345 
INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.XattrIter 2026-03-09T20:23:18.345 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.XattrIter (3359 ms) 2026-03-09T20:23:18.345 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.IsComplete 2026-03-09T20:23:18.345 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.IsComplete (3194 ms) 2026-03-09T20:23:18.345 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.IsSafe 2026-03-09T20:23:18.345 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.IsSafe (3751 ms) 2026-03-09T20:23:18.345 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.ReturnValue 2026-03-09T20:23:18.345 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.ReturnValue (4468 ms) 2026-03-09T20:23:18.345 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.Flush 2026-03-09T20:23:18.345 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.Flush (3265 ms) 2026-03-09T20:23:18.345 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.FlushAsync 2026-03-09T20:23:18.345 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.FlushAsync (2207 ms) 2026-03-09T20:23:18.345 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.RoundTripWriteFull 2026-03-09T20:23:18.345 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.RoundTripWriteFull (3163 ms) 2026-03-09T20:23:18.346 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.RoundTripWriteSame 2026-03-09T20:23:18.346 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.RoundTripWriteSame (3095 ms) 2026-03-09T20:23:18.346 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.SimpleStat 2026-03-09T20:23:18.346 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.SimpleStat (3151 ms) 2026-03-09T20:23:18.346 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.OperateMtime 2026-03-09T20:23:18.346 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.OperateMtime (3128 ms) 2026-03-09T20:23:18.346 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.Operate2Mtime 2026-03-09T20:23:18.346 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.Operate2Mtime (3013 ms) 2026-03-09T20:23:18.346 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.SimpleStatNS 2026-03-09T20:23:18.346 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.SimpleStatNS (3140 ms) 2026-03-09T20:23:18.346 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.StatRemove 2026-03-09T20:23:18.346 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.StatRemove (2310 ms) 2026-03-09T20:23:18.346 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.ExecuteClass 2026-03-09T20:23:18.346 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.ExecuteClass (3111 ms) 2026-03-09T20:23:18.346 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.MultiWrite 2026-03-09T20:23:18.346 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.MultiWrite (3060 ms) 2026-03-09T20:23:18.346 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.AioUnlock 2026-03-09T20:23:18.346 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.AioUnlock (3045 ms) 
2026-03-09T20:23:18.346 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [----------] 26 tests from LibRadosAio (81620 ms total) 2026-03-09T20:23:18.346 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: 2026-03-09T20:23:18.346 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [----------] 16 tests from LibRadosAioEC 2026-03-09T20:23:18.346 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAioEC.SimpleWrite 2026-03-09T20:23:18.909 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:23:18 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:23:18] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:23:19.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:19 vm09 ceph-mon[54524]: pgmap v169: 332 pgs: 21 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 32 unknown, 273 active+clean; 459 KiB data, 677 MiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 4.0 KiB/s wr, 5 op/s 2026-03-09T20:23:19.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:19 vm09 ceph-mon[54524]: from='client.50023 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm05-94281-27"}]': finished 2026-03-09T20:23:19.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:19 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1409580219' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm05-94592-30","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:19.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:19 vm09 ceph-mon[54524]: osdmap e149: 8 total, 8 up, 8 in 2026-03-09T20:23:19.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:19 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3571759328' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm05-94310-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:19.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:19 vm09 ceph-mon[54524]: from='client.50041 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm05-94310-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:19.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:19 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/957654116' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm05-94281-28"}]: dispatch 2026-03-09T20:23:19.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:19 vm09 ceph-mon[54524]: from='client.50047 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm05-94281-28"}]: dispatch 2026-03-09T20:23:19.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:19 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/957654116' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm05-94281-28"}]: dispatch 2026-03-09T20:23:19.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:19 vm09 ceph-mon[54524]: from='client.50047 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm05-94281-28"}]: dispatch 2026-03-09T20:23:19.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:19 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/957654116' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm05-94281-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:19.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:19 vm09 ceph-mon[54524]: from='client.50047 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm05-94281-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:19.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:19 vm05 ceph-mon[61345]: pgmap v169: 332 pgs: 21 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 32 unknown, 273 active+clean; 459 KiB data, 677 MiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 4.0 KiB/s wr, 5 op/s 2026-03-09T20:23:19.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:19 vm05 ceph-mon[61345]: from='client.50023 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm05-94281-27"}]': finished 2026-03-09T20:23:19.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:19 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1409580219' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm05-94592-30","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:19.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:19 vm05 ceph-mon[61345]: osdmap e149: 8 total, 8 up, 8 in 2026-03-09T20:23:19.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:19 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3571759328' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm05-94310-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:19.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:19 vm05 ceph-mon[61345]: from='client.50041 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm05-94310-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:19.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:19 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/957654116' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm05-94281-28"}]: dispatch 2026-03-09T20:23:19.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:19 vm05 ceph-mon[61345]: from='client.50047 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm05-94281-28"}]: dispatch 2026-03-09T20:23:19.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:19 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/957654116' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm05-94281-28"}]: dispatch 2026-03-09T20:23:19.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:19 vm05 ceph-mon[61345]: from='client.50047 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm05-94281-28"}]: dispatch 2026-03-09T20:23:19.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:19 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/957654116' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm05-94281-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:19.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:19 vm05 ceph-mon[61345]: from='client.50047 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm05-94281-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:19.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:19 vm05 ceph-mon[51870]: pgmap v169: 332 pgs: 21 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 32 unknown, 273 active+clean; 459 KiB data, 677 MiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 4.0 KiB/s wr, 5 op/s 2026-03-09T20:23:19.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:19 vm05 ceph-mon[51870]: from='client.50023 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm05-94281-27"}]': finished 2026-03-09T20:23:19.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:19 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1409580219' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm05-94592-30","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:19.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:19 vm05 ceph-mon[51870]: osdmap e149: 8 total, 8 up, 8 in 2026-03-09T20:23:19.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:19 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3571759328' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm05-94310-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:19.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:19 vm05 ceph-mon[51870]: from='client.50041 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm05-94310-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:19.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:19 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/957654116' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm05-94281-28"}]: dispatch 2026-03-09T20:23:19.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:19 vm05 ceph-mon[51870]: from='client.50047 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm05-94281-28"}]: dispatch 2026-03-09T20:23:19.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:19 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/957654116' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm05-94281-28"}]: dispatch 2026-03-09T20:23:19.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:19 vm05 ceph-mon[51870]: from='client.50047 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm05-94281-28"}]: dispatch 2026-03-09T20:23:19.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:19 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/957654116' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm05-94281-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:19.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:19 vm05 ceph-mon[51870]: from='client.50047 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm05-94281-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:20.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:20 vm09 ceph-mon[54524]: from='client.50041 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm05-94310-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:20.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:20 vm09 ceph-mon[54524]: from='client.50047 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm05-94281-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:23:20.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:20 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/957654116' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm05-94281-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm05-94281-28"}]: dispatch 2026-03-09T20:23:20.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:20 vm09 ceph-mon[54524]: osdmap e150: 8 total, 8 up, 8 in 2026-03-09T20:23:20.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:20 vm09 ceph-mon[54524]: from='client.50047 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm05-94281-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm05-94281-28"}]: dispatch 2026-03-09T20:23:20.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:20 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:20.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:20 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:20.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:20 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:20.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:20 vm09 ceph-mon[54524]: osdmap e151: 8 total, 8 up, 8 in 2026-03-09T20:23:20.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:20 vm05 ceph-mon[61345]: from='client.50041 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm05-94310-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:20.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:20 vm05 ceph-mon[61345]: from='client.50047 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm05-94281-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:23:20.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:20 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/957654116' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm05-94281-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm05-94281-28"}]: dispatch 2026-03-09T20:23:20.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:20 vm05 ceph-mon[61345]: osdmap e150: 8 total, 8 up, 8 in 2026-03-09T20:23:20.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:20 vm05 ceph-mon[61345]: from='client.50047 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm05-94281-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm05-94281-28"}]: dispatch 2026-03-09T20:23:20.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:20 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:20.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:20 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:20.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:20 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:20.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:20 vm05 ceph-mon[61345]: osdmap e151: 8 total, 8 up, 8 in 2026-03-09T20:23:20.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:20 vm05 ceph-mon[51870]: from='client.50041 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm05-94310-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:20.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:20 vm05 ceph-mon[51870]: from='client.50047 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm05-94281-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:23:20.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:20 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/957654116' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm05-94281-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm05-94281-28"}]: dispatch 2026-03-09T20:23:20.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:20 vm05 ceph-mon[51870]: osdmap e150: 8 total, 8 up, 8 in 2026-03-09T20:23:20.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:20 vm05 ceph-mon[51870]: from='client.50047 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm05-94281-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm05-94281-28"}]: dispatch 2026-03-09T20:23:20.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:20 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:20.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:20 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:20.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:20 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:20.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:20 vm05 ceph-mon[51870]: osdmap e151: 8 total, 8 up, 8 in 2026-03-09T20:23:21.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:21 vm09 ceph-mon[54524]: pgmap v172: 332 pgs: 7 active+clean+snaptrim_wait, 5 active+clean+snaptrim, 64 unknown, 256 active+clean; 457 KiB data, 677 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:23:21.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:21 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1873283213' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm05-94592-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:21.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:21 vm09 ceph-mon[54524]: from='client.50050 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm05-94592-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:21.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:21 vm09 ceph-mon[54524]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:23:21.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:21 vm09 ceph-mon[54524]: from='client.50047 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForComplete_vm05-94281-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm05-94281-28"}]': finished 2026-03-09T20:23:21.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:21 vm09 ceph-mon[54524]: from='client.50050 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm05-94592-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:21.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:21 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/1739667319' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm05-94310-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:21.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:21 vm09 ceph-mon[54524]: osdmap e152: 8 total, 8 up, 8 in 2026-03-09T20:23:21.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:21 vm09 ceph-mon[54524]: from='client.50056 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm05-94310-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:21.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:21 vm05 ceph-mon[61345]: pgmap v172: 332 pgs: 7 active+clean+snaptrim_wait, 5 active+clean+snaptrim, 64 unknown, 256 active+clean; 457 KiB data, 677 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:23:21.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:21 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1873283213' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm05-94592-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:21.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:21 vm05 ceph-mon[61345]: from='client.50050 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm05-94592-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:21.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:21 vm05 ceph-mon[61345]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:23:21.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:21 vm05 ceph-mon[61345]: from='client.50047 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForComplete_vm05-94281-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm05-94281-28"}]': finished 2026-03-09T20:23:21.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:21 vm05 ceph-mon[61345]: from='client.50050 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm05-94592-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:21.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:21 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1739667319' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm05-94310-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:21.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:21 vm05 ceph-mon[61345]: osdmap e152: 8 total, 8 up, 8 in 2026-03-09T20:23:21.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:21 vm05 ceph-mon[61345]: from='client.50056 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm05-94310-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:21.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:21 vm05 ceph-mon[51870]: pgmap v172: 332 pgs: 7 active+clean+snaptrim_wait, 5 active+clean+snaptrim, 64 unknown, 256 active+clean; 457 KiB data, 677 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:23:21.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:21 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/1873283213' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm05-94592-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:21.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:21 vm05 ceph-mon[51870]: from='client.50050 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm05-94592-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:21.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:21 vm05 ceph-mon[51870]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:23:21.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:21 vm05 ceph-mon[51870]: from='client.50047 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForComplete_vm05-94281-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm05-94281-28"}]': finished 2026-03-09T20:23:21.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:21 vm05 ceph-mon[51870]: from='client.50050 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm05-94592-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:21.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:21 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1739667319' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm05-94310-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:21.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:21 vm05 ceph-mon[51870]: osdmap e152: 8 total, 8 up, 8 in 2026-03-09T20:23:21.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:21 vm05 ceph-mon[51870]: from='client.50056 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm05-94310-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:22.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:22 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-15", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:23:22.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:22 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-15", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:23:22.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:22 vm09 ceph-mon[54524]: from='client.50056 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm05-94310-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:22.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:22 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-15", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:23:22.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:22 vm09 ceph-mon[54524]: osdmap e153: 8 total, 8 up, 8 in 2026-03-09T20:23:22.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:22 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-15"}]: dispatch 2026-03-09T20:23:22.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:22 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-15"}]: dispatch 2026-03-09T20:23:22.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:22 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/4075145225' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm05-94592-36"}]: dispatch 2026-03-09T20:23:22.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:22 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/4075145225' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm05-94592-36"}]: dispatch 2026-03-09T20:23:22.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:22 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/4075145225' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm05-94592-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:22.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:22 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-15", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:23:22.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:22 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-15", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:23:22.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:22 vm05 ceph-mon[61345]: from='client.50056 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm05-94310-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:22.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:22 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-15", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:23:22.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:22 vm05 ceph-mon[61345]: osdmap e153: 8 total, 8 up, 8 in 2026-03-09T20:23:22.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:22 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-15"}]: dispatch 2026-03-09T20:23:22.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:22 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-15"}]: dispatch 2026-03-09T20:23:22.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:22 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/4075145225' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm05-94592-36"}]: dispatch 2026-03-09T20:23:22.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:22 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/4075145225' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm05-94592-36"}]: dispatch 2026-03-09T20:23:22.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:22 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/4075145225' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm05-94592-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:22.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:22 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-15", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:23:22.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:22 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-15", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:23:22.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:22 vm05 ceph-mon[51870]: from='client.50056 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm05-94310-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:22.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:22 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-15", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:23:22.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:22 vm05 ceph-mon[51870]: osdmap e153: 8 total, 8 up, 8 in 2026-03-09T20:23:22.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:22 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-15"}]: dispatch 2026-03-09T20:23:22.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:22 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-15"}]: dispatch 2026-03-09T20:23:22.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:22 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/4075145225' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm05-94592-36"}]: dispatch 2026-03-09T20:23:22.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:22 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/4075145225' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm05-94592-36"}]: dispatch 2026-03-09T20:23:22.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:22 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/4075145225' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm05-94592-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:23.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:23 vm09 ceph-mon[54524]: pgmap v175: 372 pgs: 36 creating+peering, 6 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 55 unknown, 269 active+clean; 489 KiB data, 679 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:23:23.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:23 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-15"}]': finished 2026-03-09T20:23:23.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:23 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/4075145225' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm05-94592-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:23:23.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:23 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-15", "mode": "writeback"}]: dispatch 2026-03-09T20:23:23.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:23 vm09 ceph-mon[54524]: osdmap e154: 8 total, 8 up, 8 in 2026-03-09T20:23:23.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:23 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1485324066' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-94758-21"}]: dispatch 2026-03-09T20:23:23.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:23 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/957654116' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm05-94281-28"}]: dispatch 2026-03-09T20:23:23.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:23 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/4075145225' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm05-94592-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm05-94592-36"}]: dispatch 2026-03-09T20:23:23.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:23 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-15", "mode": "writeback"}]: dispatch 2026-03-09T20:23:23.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:23 vm09 ceph-mon[54524]: from='client.49999 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-94758-21"}]: dispatch 2026-03-09T20:23:23.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:23 vm09 ceph-mon[54524]: from='client.50047 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm05-94281-28"}]: dispatch 2026-03-09T20:23:23.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:23 vm05 ceph-mon[61345]: pgmap v175: 372 pgs: 36 creating+peering, 6 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 55 unknown, 269 active+clean; 489 KiB data, 679 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:23:23.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:23 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-15"}]': finished 2026-03-09T20:23:23.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:23 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/4075145225' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm05-94592-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:23:23.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:23 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-15", "mode": "writeback"}]: dispatch 2026-03-09T20:23:23.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:23 vm05 ceph-mon[61345]: osdmap e154: 8 total, 8 up, 8 in 2026-03-09T20:23:23.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:23 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1485324066' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-94758-21"}]: dispatch 2026-03-09T20:23:23.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:23 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/957654116' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm05-94281-28"}]: dispatch 2026-03-09T20:23:23.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:23 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/4075145225' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm05-94592-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm05-94592-36"}]: dispatch 2026-03-09T20:23:23.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:23 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-15", "mode": "writeback"}]: dispatch 2026-03-09T20:23:23.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:23 vm05 ceph-mon[61345]: from='client.49999 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-94758-21"}]: dispatch 2026-03-09T20:23:23.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:23 vm05 ceph-mon[61345]: from='client.50047 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm05-94281-28"}]: dispatch 2026-03-09T20:23:23.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:23 vm05 ceph-mon[51870]: pgmap v175: 372 pgs: 36 creating+peering, 6 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 55 unknown, 269 active+clean; 489 KiB data, 679 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:23:23.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:23 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-15"}]': finished 2026-03-09T20:23:23.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:23 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/4075145225' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm05-94592-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:23:23.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:23 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-15", "mode": "writeback"}]: dispatch 2026-03-09T20:23:23.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:23 vm05 ceph-mon[51870]: osdmap e154: 8 total, 8 up, 8 in 2026-03-09T20:23:23.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:23 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1485324066' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-94758-21"}]: dispatch 2026-03-09T20:23:23.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:23 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/957654116' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm05-94281-28"}]: dispatch 2026-03-09T20:23:23.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:23 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/4075145225' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm05-94592-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm05-94592-36"}]: dispatch 2026-03-09T20:23:23.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:23 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-15", "mode": "writeback"}]: dispatch 2026-03-09T20:23:23.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:23 vm05 ceph-mon[51870]: from='client.49999 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-94758-21"}]: dispatch 2026-03-09T20:23:23.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:23 vm05 ceph-mon[51870]: from='client.50047 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm05-94281-28"}]: dispatch 2026-03-09T20:23:24.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:24 vm05 ceph-mon[61345]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:23:24.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:24 vm05 ceph-mon[51870]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:23:25.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:24 vm09 ceph-mon[54524]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:23:25.528 INFO:tasks.workunit.client.0.vm05.stdout: ackPP (1300 ms) 2026-03-09T20:23:25.528 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsECPP.SnapGetNamePP 2026-03-09T20:23:25.528 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsECPP.SnapGetNamePP (2112 ms) 2026-03-09T20:23:25.528 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [----------] 4 tests from LibRadosSnapshotsECPP (8554 ms total) 2026-03-09T20:23:25.528 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: 2026-03-09T20:23:25.528 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [----------] 3 tests from LibRadosSnapshotsSelfManagedECPP 2026-03-09T20:23:25.528 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedECPP.SnapPP 2026-03-09T20:23:25.528 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedECPP.SnapPP (4145 ms) 2026-03-09T20:23:25.528 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedECPP.RollbackPP 2026-03-09T20:23:25.528 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedECPP.RollbackPP (3971 ms) 2026-03-09T20:23:25.528 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedECPP.Bug11677 2026-03-09T20:23:25.528 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedECPP.Bug11677 (4074 ms) 2026-03-09T20:23:25.528 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [----------] 3 tests from LibRadosSnapshotsSelfManagedECPP (12190 ms total) 2026-03-09T20:23:25.528 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: 2026-03-09T20:23:25.528 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [----------] Global test environment tear-down 2026-03-09T20:23:25.528 
INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [==========] 21 tests from 5 test suites ran. (95650 ms total) 2026-03-09T20:23:25.529 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ PASSED ] 20 tests. 2026-03-09T20:23:25.529 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ SKIPPED ] 1 test, listed below: 2026-03-09T20:23:25.529 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ SKIPPED ] LibRadosSnapshotsSelfManagedPP.WriteRollback 2026-03-09T20:23:25.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:25 vm09 ceph-mon[54524]: pgmap v178: 292 pgs: 19 creating+peering, 4 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 263 active+clean; 457 KiB data, 679 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 0 op/s 2026-03-09T20:23:25.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:25 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-15", "mode": "writeback"}]': finished 2026-03-09T20:23:25.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:25 vm09 ceph-mon[54524]: from='client.49999 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-94758-21"}]': finished 2026-03-09T20:23:25.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:25 vm09 ceph-mon[54524]: from='client.50047 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm05-94281-28"}]': finished 2026-03-09T20:23:25.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:25 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1485324066' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm05-94758-21"}]: dispatch 2026-03-09T20:23:25.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:25 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/957654116' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm05-94281-28"}]: dispatch 2026-03-09T20:23:25.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:25 vm09 ceph-mon[54524]: osdmap e155: 8 total, 8 up, 8 in 2026-03-09T20:23:25.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:25 vm09 ceph-mon[54524]: from='client.49999 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm05-94758-21"}]: dispatch 2026-03-09T20:23:25.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:25 vm09 ceph-mon[54524]: from='client.50047 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm05-94281-28"}]: dispatch 2026-03-09T20:23:25.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:25 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3782938913' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OmapPP_vm05-94310-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:25.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:25 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/4075145225' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm05-94592-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm05-94592-36"}]': finished 2026-03-09T20:23:25.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:25 vm09 ceph-mon[54524]: from='client.49999 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm05-94758-21"}]': finished 2026-03-09T20:23:25.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:25 vm09 ceph-mon[54524]: from='client.50047 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm05-94281-28"}]': finished 2026-03-09T20:23:25.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:25 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3782938913' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OmapPP_vm05-94310-26","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:25.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:25 vm09 ceph-mon[54524]: osdmap e156: 8 total, 8 up, 8 in 2026-03-09T20:23:25.773 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:23:25 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:23:25.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:25 vm05 ceph-mon[61345]: pgmap v178: 292 pgs: 19 creating+peering, 4 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 263 active+clean; 457 KiB data, 679 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 0 op/s 2026-03-09T20:23:25.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:25 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-15", "mode": "writeback"}]': finished 2026-03-09T20:23:25.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:25 vm05 ceph-mon[61345]: from='client.49999 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-94758-21"}]': finished 2026-03-09T20:23:25.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:25 vm05 ceph-mon[61345]: from='client.50047 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm05-94281-28"}]': finished 2026-03-09T20:23:25.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:25 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1485324066' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm05-94758-21"}]: dispatch 2026-03-09T20:23:25.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:25 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/957654116' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm05-94281-28"}]: dispatch 2026-03-09T20:23:25.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:25 vm05 ceph-mon[61345]: osdmap e155: 8 total, 8 up, 8 in 2026-03-09T20:23:25.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:25 vm05 ceph-mon[61345]: from='client.49999 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm05-94758-21"}]: dispatch 2026-03-09T20:23:25.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:25 vm05 ceph-mon[61345]: from='client.50047 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm05-94281-28"}]: dispatch 2026-03-09T20:23:25.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:25 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3782938913' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OmapPP_vm05-94310-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:25.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:25 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/4075145225' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm05-94592-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm05-94592-36"}]': finished 2026-03-09T20:23:25.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:25 vm05 ceph-mon[61345]: from='client.49999 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm05-94758-21"}]': finished 2026-03-09T20:23:25.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:25 vm05 ceph-mon[61345]: from='client.50047 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm05-94281-28"}]': finished 2026-03-09T20:23:25.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:25 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3782938913' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OmapPP_vm05-94310-26","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:25.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:25 vm05 ceph-mon[61345]: osdmap e156: 8 total, 8 up, 8 in 2026-03-09T20:23:25.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:25 vm05 ceph-mon[51870]: pgmap v178: 292 pgs: 19 creating+peering, 4 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 263 active+clean; 457 KiB data, 679 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 0 op/s 2026-03-09T20:23:25.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:25 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-15", "mode": "writeback"}]': finished 2026-03-09T20:23:25.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:25 vm05 ceph-mon[51870]: from='client.49999 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-94758-21"}]': finished 2026-03-09T20:23:25.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:25 vm05 ceph-mon[51870]: from='client.50047 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm05-94281-28"}]': finished 2026-03-09T20:23:25.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:25 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/1485324066' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm05-94758-21"}]: dispatch 2026-03-09T20:23:25.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:25 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/957654116' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm05-94281-28"}]: dispatch 2026-03-09T20:23:25.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:25 vm05 ceph-mon[51870]: osdmap e155: 8 total, 8 up, 8 in 2026-03-09T20:23:25.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:25 vm05 ceph-mon[51870]: from='client.49999 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm05-94758-21"}]: dispatch 2026-03-09T20:23:25.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:25 vm05 ceph-mon[51870]: from='client.50047 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm05-94281-28"}]: dispatch 2026-03-09T20:23:25.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:25 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3782938913' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OmapPP_vm05-94310-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:25.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:25 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/4075145225' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm05-94592-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm05-94592-36"}]': finished 2026-03-09T20:23:25.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:25 vm05 ceph-mon[51870]: from='client.49999 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm05-94758-21"}]': finished 2026-03-09T20:23:25.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:25 vm05 ceph-mon[51870]: from='client.50047 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm05-94281-28"}]': finished 2026-03-09T20:23:25.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:25 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3782938913' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OmapPP_vm05-94310-26","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:25.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:25 vm05 ceph-mon[51870]: osdmap e156: 8 total, 8 up, 8 in 2026-03-09T20:23:26.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:26 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:23:26.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:26 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1949899526' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm05-94281-29"}]: dispatch 2026-03-09T20:23:26.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:26 vm05 ceph-mon[61345]: from='client.50071 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm05-94281-29"}]: dispatch 2026-03-09T20:23:26.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:26 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1949899526' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm05-94281-29"}]: dispatch 2026-03-09T20:23:26.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:26 vm05 ceph-mon[61345]: from='client.50071 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm05-94281-29"}]: dispatch 2026-03-09T20:23:26.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:26 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1949899526' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm05-94281-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:26.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:26 vm05 ceph-mon[61345]: from='client.50071 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm05-94281-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:26.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:26 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:23:26.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:26 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:23:26.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:26 vm05 ceph-mon[61345]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:23:26.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:26 vm05 ceph-mon[61345]: from='client.50071 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm05-94281-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:23:26.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:26 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:23:26.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:26 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-15"}]: dispatch 2026-03-09T20:23:26.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:26 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1949899526' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip_vm05-94281-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm05-94281-29"}]: dispatch 2026-03-09T20:23:26.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:26 vm05 ceph-mon[61345]: osdmap e157: 8 total, 8 up, 8 in 2026-03-09T20:23:26.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:26 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-15"}]: dispatch 2026-03-09T20:23:26.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:26 vm05 ceph-mon[61345]: from='client.50071 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip_vm05-94281-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm05-94281-29"}]: dispatch 2026-03-09T20:23:26.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:26 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:23:26.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:26 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1949899526' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm05-94281-29"}]: dispatch 2026-03-09T20:23:26.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:26 vm05 ceph-mon[51870]: from='client.50071 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm05-94281-29"}]: dispatch 2026-03-09T20:23:26.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:26 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1949899526' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm05-94281-29"}]: dispatch 2026-03-09T20:23:26.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:26 vm05 ceph-mon[51870]: from='client.50071 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm05-94281-29"}]: dispatch 2026-03-09T20:23:26.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:26 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1949899526' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm05-94281-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:26.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:26 vm05 ceph-mon[51870]: from='client.50071 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm05-94281-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:26.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:26 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:23:26.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:26 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:23:26.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:26 vm05 ceph-mon[51870]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:23:26.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:26 vm05 ceph-mon[51870]: from='client.50071 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm05-94281-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:23:26.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:26 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:23:26.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:26 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-15"}]: dispatch 2026-03-09T20:23:26.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:26 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1949899526' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip_vm05-94281-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm05-94281-29"}]: dispatch 2026-03-09T20:23:26.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:26 vm05 ceph-mon[51870]: osdmap e157: 8 total, 8 up, 8 in 2026-03-09T20:23:26.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:26 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-15"}]: dispatch 2026-03-09T20:23:26.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:26 vm05 ceph-mon[51870]: from='client.50071 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip_vm05-94281-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm05-94281-29"}]: dispatch 2026-03-09T20:23:27.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:26 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:23:27.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:26 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1949899526' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm05-94281-29"}]: dispatch 2026-03-09T20:23:27.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:26 vm09 ceph-mon[54524]: from='client.50071 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm05-94281-29"}]: dispatch 2026-03-09T20:23:27.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:26 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/1949899526' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm05-94281-29"}]: dispatch 2026-03-09T20:23:27.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:26 vm09 ceph-mon[54524]: from='client.50071 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm05-94281-29"}]: dispatch 2026-03-09T20:23:27.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:26 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1949899526' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm05-94281-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:27.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:26 vm09 ceph-mon[54524]: from='client.50071 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm05-94281-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:27.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:26 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:23:27.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:26 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:23:27.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:26 vm09 ceph-mon[54524]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:23:27.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:26 vm09 ceph-mon[54524]: from='client.50071 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm05-94281-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:23:27.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:26 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:23:27.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:26 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-15"}]: dispatch 2026-03-09T20:23:27.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:26 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/1949899526' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip_vm05-94281-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm05-94281-29"}]: dispatch 2026-03-09T20:23:27.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:26 vm09 ceph-mon[54524]: osdmap e157: 8 total, 8 up, 8 in 2026-03-09T20:23:27.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:26 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-15"}]: dispatch 2026-03-09T20:23:27.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:26 vm09 ceph-mon[54524]: from='client.50071 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip_vm05-94281-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm05-94281-29"}]: dispatch 2026-03-09T20:23:27.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:27 vm05 ceph-mon[61345]: pgmap v181: 332 pgs: 1 creating+activating, 28 creating+peering, 7 unknown, 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 281 active+clean; 457 KiB data, 688 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:23:27.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:27 vm05 ceph-mon[61345]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:23:27.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:27 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-15"}]': finished 2026-03-09T20:23:27.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:27 vm05 ceph-mon[61345]: osdmap e158: 8 total, 8 up, 8 in 2026-03-09T20:23:27.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:27 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/75081985' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm05-94310-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:27.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:27 vm05 ceph-mon[61345]: from='client.50077 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm05-94310-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:27.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:27 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/4075145225' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm05-94592-36"}]: dispatch 2026-03-09T20:23:27.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:27 vm05 ceph-mon[51870]: pgmap v181: 332 pgs: 1 creating+activating, 28 creating+peering, 7 unknown, 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 281 active+clean; 457 KiB data, 688 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:23:27.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:27 vm05 ceph-mon[51870]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:23:27.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:27 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-15"}]': finished 2026-03-09T20:23:27.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:27 vm05 ceph-mon[51870]: osdmap e158: 8 total, 8 up, 8 in 2026-03-09T20:23:27.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:27 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/75081985' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm05-94310-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:27.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:27 vm05 ceph-mon[51870]: from='client.50077 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm05-94310-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:27.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:27 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/4075145225' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm05-94592-36"}]: dispatch 2026-03-09T20:23:28.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:27 vm09 ceph-mon[54524]: pgmap v181: 332 pgs: 1 creating+activating, 28 creating+peering, 7 unknown, 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 281 active+clean; 457 KiB data, 688 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:23:28.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:27 vm09 ceph-mon[54524]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:23:28.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:27 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-15"}]': finished 2026-03-09T20:23:28.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:27 vm09 ceph-mon[54524]: osdmap e158: 8 total, 8 up, 8 in 2026-03-09T20:23:28.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:27 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/75081985' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm05-94310-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:28.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:27 vm09 ceph-mon[54524]: from='client.50077 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm05-94310-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:28.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:27 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/4075145225' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm05-94592-36"}]: dispatch
2026-03-09T20:23:28.909 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:23:28 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:23:28] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-09T20:23:29.542 INFO:tasks.workunit.client.0.vm05.stdout:cPP (70517 ms total)
2026-03-09T20:23:29.542 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp:
2026-03-09T20:23:29.542 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [----------] 1 test from LibRadosTwoPoolsECPP
2026-03-09T20:23:29.542 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosTwoPoolsECPP.CopyFrom
2026-03-09T20:23:29.542 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosTwoPoolsECPP.CopyFrom (109 ms)
2026-03-09T20:23:29.542 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [----------] 1 test from LibRadosTwoPoolsECPP (109 ms total)
2026-03-09T20:23:29.542 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp:
2026-03-09T20:23:29.542 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [----------] 2 tests from LibRadosChecksum/0, where TypeParam = LibRadosChecksumParams<(rados_checksum_type_t)0, Checksummer::xxhash32, ceph_le >
2026-03-09T20:23:29.542 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosChecksum/0.Subset
2026-03-09T20:23:29.542 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosChecksum/0.Subset (39 ms)
2026-03-09T20:23:29.542 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosChecksum/0.Chunked
2026-03-09T20:23:29.542 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosChecksum/0.Chunked (45 ms)
2026-03-09T20:23:29.542 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [----------] 2 tests from LibRadosChecksum/0 (84 ms total)
2026-03-09T20:23:29.542 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp:
2026-03-09T20:23:29.542 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [----------] 2 tests from LibRadosChecksum/1, where TypeParam = LibRadosChecksumParams<(rados_checksum_type_t)1, Checksummer::xxhash64, ceph_le >
2026-03-09T20:23:29.542 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosChecksum/1.Subset
2026-03-09T20:23:29.542 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosChecksum/1.Subset (75 ms)
2026-03-09T20:23:29.542 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosChecksum/1.Chunked
2026-03-09T20:23:29.542 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosChecksum/1.Chunked (2 ms)
2026-03-09T20:23:29.542 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [----------] 2 tests from LibRadosChecksum/1 (77 ms total)
2026-03-09T20:23:29.542 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp:
2026-03-09T20:23:29.542 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [----------] 2 tests from LibRadosChecksum/2, where TypeParam = LibRadosChecksumParams<(rados_checksum_type_t)2, Checksummer::crc32c, ceph_le >
2026-03-09T20:23:29.542 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosChecksum/2.Subset
2026-03-09T20:23:29.542 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosChecksum/2.Subset (51 ms)
2026-03-09T20:23:29.542 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosChecksum/2.Chunked
2026-03-09T20:23:29.543 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosChecksum/2.Chunked (3 ms)
2026-03-09T20:23:29.543 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [----------] 2 tests from LibRadosChecksum/2 (54 ms total)
2026-03-09T20:23:29.543 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp:
2026-03-09T20:23:29.543 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [----------] 1 test from LibRadosMiscECPP
2026-03-09T20:23:29.543 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosMiscECPP.CompareExtentRange
2026-03-09T20:23:29.543 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscECPP.CompareExtentRange (1051 ms)
2026-03-09T20:23:29.543 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [----------] 1 test from LibRadosMiscECPP (1051 ms total)
2026-03-09T20:23:29.543 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp:
2026-03-09T20:23:29.543 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [----------] Global test environment tear-down
2026-03-09T20:23:29.543 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [==========] 31 tests from 7 test suites ran. (99721 ms total)
2026-03-09T20:23:29.543 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ PASSED ] 31 tests.
2026-03-09T20:23:29.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:29 vm05 ceph-mon[61345]: pgmap v184: 324 pgs: 32 unknown, 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 277 active+clean; 457 KiB data, 688 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
2026-03-09T20:23:29.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:29 vm05 ceph-mon[61345]: from='client.50071 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip_vm05-94281-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm05-94281-29"}]': finished
2026-03-09T20:23:29.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:29 vm05 ceph-mon[61345]: from='client.50077 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm05-94310-27","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-09T20:23:29.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:29 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/4075145225' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm05-94592-36"}]': finished
2026-03-09T20:23:29.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:29 vm05 ceph-mon[61345]: osdmap e159: 8 total, 8 up, 8 in
2026-03-09T20:23:29.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:29 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/4075145225' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm05-94592-36"}]: dispatch 2026-03-09T20:23:29.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:29 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:23:29.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:29 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:23:29.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:29 vm05 ceph-mon[51870]: pgmap v184: 324 pgs: 32 unknown, 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 277 active+clean; 457 KiB data, 688 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T20:23:29.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:29 vm05 ceph-mon[51870]: from='client.50071 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip_vm05-94281-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm05-94281-29"}]': finished 2026-03-09T20:23:29.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:29 vm05 ceph-mon[51870]: from='client.50077 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm05-94310-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:29.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:29 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/4075145225' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm05-94592-36"}]': finished 2026-03-09T20:23:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:29 vm05 ceph-mon[51870]: osdmap e159: 8 total, 8 up, 8 in 2026-03-09T20:23:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:29 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/4075145225' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm05-94592-36"}]: dispatch 2026-03-09T20:23:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:29 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:23:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:29 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:23:30.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:29 vm09 ceph-mon[54524]: pgmap v184: 324 pgs: 32 unknown, 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 277 active+clean; 457 KiB data, 688 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T20:23:30.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:29 vm09 ceph-mon[54524]: from='client.50071 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip_vm05-94281-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm05-94281-29"}]': finished 2026-03-09T20:23:30.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:29 vm09 ceph-mon[54524]: from='client.50077 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm05-94310-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:30.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:29 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/4075145225' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm05-94592-36"}]': finished 2026-03-09T20:23:30.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:29 vm09 ceph-mon[54524]: osdmap e159: 8 total, 8 up, 8 in 2026-03-09T20:23:30.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:29 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/4075145225' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm05-94592-36"}]: dispatch 2026-03-09T20:23:30.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:29 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:23:30.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:29 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:23:30.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:30 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/4075145225' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm05-94592-36"}]': finished 2026-03-09T20:23:30.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:30 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:30.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:30 vm05 ceph-mon[61345]: osdmap e160: 8 total, 8 up, 8 in 2026-03-09T20:23:30.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:30 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:30.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:30 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:30.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:30 vm05 ceph-mon[61345]: osdmap e161: 8 total, 8 up, 8 in 2026-03-09T20:23:30.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:30 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1949899526' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm05-94281-29"}]: dispatch 2026-03-09T20:23:30.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:30 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/2477994326' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm05-94310-28","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:30.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:30 vm05 ceph-mon[61345]: from='client.50071 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm05-94281-29"}]: dispatch 2026-03-09T20:23:30.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:30 vm05 ceph-mon[61345]: from='client.50080 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm05-94310-28","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:30 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/4075145225' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm05-94592-36"}]': finished 2026-03-09T20:23:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:30 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:30 vm05 ceph-mon[51870]: osdmap e160: 8 total, 8 up, 8 in 2026-03-09T20:23:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:30 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:30 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:30 vm05 ceph-mon[51870]: osdmap e161: 8 total, 8 up, 8 in 2026-03-09T20:23:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:30 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1949899526' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm05-94281-29"}]: dispatch 2026-03-09T20:23:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:30 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2477994326' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm05-94310-28","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:30 vm05 ceph-mon[51870]: from='client.50071 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm05-94281-29"}]: dispatch 2026-03-09T20:23:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:30 vm05 ceph-mon[51870]: from='client.50080 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm05-94310-28","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:31.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:30 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/4075145225' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm05-94592-36"}]': finished 2026-03-09T20:23:31.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:30 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:31.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:30 vm09 ceph-mon[54524]: osdmap e160: 8 total, 8 up, 8 in 2026-03-09T20:23:31.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:30 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:31.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:30 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:31.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:30 vm09 ceph-mon[54524]: osdmap e161: 8 total, 8 up, 8 in 2026-03-09T20:23:31.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:30 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1949899526' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm05-94281-29"}]: dispatch 2026-03-09T20:23:31.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:30 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2477994326' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm05-94310-28","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:31.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:30 vm09 ceph-mon[54524]: from='client.50071 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm05-94281-29"}]: dispatch 2026-03-09T20:23:31.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:30 vm09 ceph-mon[54524]: from='client.50080 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm05-94310-28","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:31.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:31 vm05 ceph-mon[61345]: pgmap v187: 300 pgs: 40 unknown, 4 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 250 active+clean; 457 KiB data, 688 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:23:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:31 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-17", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:23:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:31 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-17", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:23:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:31 vm05 ceph-mon[61345]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:23:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:31 vm05 ceph-mon[61345]: from='client.50071 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm05-94281-29"}]': finished 2026-03-09T20:23:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:31 vm05 ceph-mon[61345]: from='client.50080 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm05-94310-28","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:31 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-17", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:23:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:31 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1949899526' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm05-94281-29"}]: dispatch 2026-03-09T20:23:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:31 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-17"}]: dispatch 2026-03-09T20:23:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:31 vm05 ceph-mon[61345]: osdmap e162: 8 total, 8 up, 8 in 2026-03-09T20:23:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:31 vm05 ceph-mon[61345]: from='client.50071 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm05-94281-29"}]: dispatch 2026-03-09T20:23:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:31 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-17"}]: dispatch 2026-03-09T20:23:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:31 vm05 ceph-mon[51870]: pgmap v187: 300 pgs: 40 unknown, 4 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 250 active+clean; 457 KiB data, 688 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:23:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:31 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-17", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:23:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:31 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-17", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:23:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:31 vm05 ceph-mon[51870]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:23:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:31 vm05 ceph-mon[51870]: from='client.50071 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm05-94281-29"}]': finished 2026-03-09T20:23:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:31 vm05 ceph-mon[51870]: from='client.50080 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm05-94310-28","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:31 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-17", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:23:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:31 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1949899526' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm05-94281-29"}]: dispatch 2026-03-09T20:23:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:31 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-17"}]: dispatch 2026-03-09T20:23:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:31 vm05 ceph-mon[51870]: osdmap e162: 8 total, 8 up, 8 in 2026-03-09T20:23:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:31 vm05 ceph-mon[51870]: from='client.50071 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm05-94281-29"}]: dispatch 2026-03-09T20:23:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:31 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-17"}]: dispatch 2026-03-09T20:23:32.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:31 vm09 ceph-mon[54524]: pgmap v187: 300 pgs: 40 unknown, 4 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 250 active+clean; 457 KiB data, 688 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:23:32.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:31 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-17", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:23:32.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:31 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-17", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:23:32.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:31 vm09 ceph-mon[54524]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:23:32.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:31 vm09 ceph-mon[54524]: from='client.50071 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm05-94281-29"}]': finished 2026-03-09T20:23:32.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:31 vm09 ceph-mon[54524]: from='client.50080 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm05-94310-28","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:32.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:31 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-17", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:23:32.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:31 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1949899526' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm05-94281-29"}]: dispatch 2026-03-09T20:23:32.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:31 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-17"}]: dispatch 2026-03-09T20:23:32.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:31 vm09 ceph-mon[54524]: osdmap e162: 8 total, 8 up, 8 in 2026-03-09T20:23:32.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:31 vm09 ceph-mon[54524]: from='client.50071 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm05-94281-29"}]: dispatch 2026-03-09T20:23:32.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:31 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-17"}]: dispatch 2026-03-09T20:23:33.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:33 vm05 ceph-mon[61345]: pgmap v190: 324 pgs: 32 creating+peering, 29 unknown, 4 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 253 active+clean; 457 KiB data, 676 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:23:33.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:33 vm05 ceph-mon[61345]: from='client.50071 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm05-94281-29"}]': finished 2026-03-09T20:23:33.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:33 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-17"}]': finished 2026-03-09T20:23:33.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:33 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-17", "mode": "writeback"}]: dispatch 2026-03-09T20:23:33.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:33 vm05 ceph-mon[61345]: osdmap e163: 8 total, 8 up, 8 in 2026-03-09T20:23:33.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:33 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-17", "mode": "writeback"}]: dispatch 2026-03-09T20:23:33.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:33 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/4246888350' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm05-94281-30"}]: dispatch 2026-03-09T20:23:33.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:33 vm05 ceph-mon[61345]: from='client.50089 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm05-94281-30"}]: dispatch 2026-03-09T20:23:33.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:33 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/4246888350' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm05-94281-30"}]: dispatch 2026-03-09T20:23:33.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:33 vm05 ceph-mon[61345]: from='client.50089 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm05-94281-30"}]: dispatch 2026-03-09T20:23:33.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:33 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/4246888350' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm05-94281-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:33.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:33 vm05 ceph-mon[61345]: from='client.50089 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm05-94281-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:33.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:33 vm05 ceph-mon[51870]: pgmap v190: 324 pgs: 32 creating+peering, 29 unknown, 4 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 253 active+clean; 457 KiB data, 676 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:23:33.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:33 vm05 ceph-mon[51870]: from='client.50071 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm05-94281-29"}]': finished 2026-03-09T20:23:33.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:33 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-17"}]': finished 2026-03-09T20:23:33.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:33 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-17", "mode": "writeback"}]: dispatch 2026-03-09T20:23:33.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:33 vm05 ceph-mon[51870]: osdmap e163: 8 total, 8 up, 8 in 2026-03-09T20:23:33.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:33 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-17", "mode": "writeback"}]: dispatch 2026-03-09T20:23:33.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:33 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/4246888350' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm05-94281-30"}]: dispatch 2026-03-09T20:23:33.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:33 vm05 ceph-mon[51870]: from='client.50089 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm05-94281-30"}]: dispatch 2026-03-09T20:23:33.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:33 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/4246888350' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm05-94281-30"}]: dispatch 2026-03-09T20:23:33.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:33 vm05 ceph-mon[51870]: from='client.50089 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm05-94281-30"}]: dispatch 2026-03-09T20:23:33.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:33 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/4246888350' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm05-94281-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:33.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:33 vm05 ceph-mon[51870]: from='client.50089 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm05-94281-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:34.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:33 vm09 ceph-mon[54524]: pgmap v190: 324 pgs: 32 creating+peering, 29 unknown, 4 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 253 active+clean; 457 KiB data, 676 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:23:34.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:33 vm09 ceph-mon[54524]: from='client.50071 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm05-94281-29"}]': finished 2026-03-09T20:23:34.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:33 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-17"}]': finished 2026-03-09T20:23:34.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:33 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-17", "mode": "writeback"}]: dispatch 2026-03-09T20:23:34.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:33 vm09 ceph-mon[54524]: osdmap e163: 8 total, 8 up, 8 in 2026-03-09T20:23:34.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:33 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-17", "mode": "writeback"}]: dispatch 2026-03-09T20:23:34.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:33 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/4246888350' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm05-94281-30"}]: dispatch 2026-03-09T20:23:34.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:33 vm09 ceph-mon[54524]: from='client.50089 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm05-94281-30"}]: dispatch 2026-03-09T20:23:34.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:33 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/4246888350' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm05-94281-30"}]: dispatch 2026-03-09T20:23:34.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:33 vm09 ceph-mon[54524]: from='client.50089 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm05-94281-30"}]: dispatch 2026-03-09T20:23:34.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:33 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/4246888350' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm05-94281-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:34.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:33 vm09 ceph-mon[54524]: from='client.50089 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm05-94281-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:34.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:34 vm05 ceph-mon[61345]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:23:34.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:34 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-17", "mode": "writeback"}]': finished 2026-03-09T20:23:34.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:34 vm05 ceph-mon[61345]: from='client.50089 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm05-94281-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:23:34.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:34 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/4246888350' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm05-94281-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm05-94281-30"}]: dispatch 2026-03-09T20:23:34.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:34 vm05 ceph-mon[61345]: osdmap e164: 8 total, 8 up, 8 in 2026-03-09T20:23:34.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:34 vm05 ceph-mon[61345]: from='client.50089 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm05-94281-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm05-94281-30"}]: dispatch 2026-03-09T20:23:34.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:34 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/52937016' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm05-94310-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:34.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:34 vm05 ceph-mon[61345]: from='client.50092 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm05-94310-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:34.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:34 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:23:34.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:34 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:23:34.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:34 vm05 ceph-mon[61345]: from='client.50092 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm05-94310-29","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:34.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:34 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:23:34.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:34 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-17"}]: dispatch 2026-03-09T20:23:34.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:34 vm05 ceph-mon[61345]: osdmap e165: 8 total, 8 up, 8 in 2026-03-09T20:23:34.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:34 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-17"}]: dispatch 2026-03-09T20:23:34.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:34 vm05 ceph-mon[51870]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:23:34.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:34 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-17", "mode": "writeback"}]': finished 2026-03-09T20:23:34.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:34 vm05 ceph-mon[51870]: from='client.50089 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm05-94281-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:23:34.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:34 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/4246888350' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm05-94281-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm05-94281-30"}]: dispatch 2026-03-09T20:23:34.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:34 vm05 ceph-mon[51870]: osdmap e164: 8 total, 8 up, 8 in 2026-03-09T20:23:34.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:34 vm05 ceph-mon[51870]: from='client.50089 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm05-94281-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm05-94281-30"}]: dispatch 2026-03-09T20:23:34.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:34 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/52937016' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm05-94310-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:34.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:34 vm05 ceph-mon[51870]: from='client.50092 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm05-94310-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:34.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:34 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:23:34.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:34 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:23:34.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:34 vm05 ceph-mon[51870]: from='client.50092 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm05-94310-29","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:34.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:34 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:23:34.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:34 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-17"}]: dispatch 2026-03-09T20:23:34.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:34 vm05 ceph-mon[51870]: osdmap e165: 8 total, 8 up, 8 in 2026-03-09T20:23:34.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:34 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-17"}]: dispatch 2026-03-09T20:23:35.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:34 vm09 ceph-mon[54524]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:23:35.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:34 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-17", "mode": "writeback"}]': finished 2026-03-09T20:23:35.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:34 vm09 ceph-mon[54524]: from='client.50089 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm05-94281-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:23:35.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:34 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/4246888350' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm05-94281-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm05-94281-30"}]: dispatch 2026-03-09T20:23:35.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:34 vm09 ceph-mon[54524]: osdmap e164: 8 total, 8 up, 8 in 2026-03-09T20:23:35.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:34 vm09 ceph-mon[54524]: from='client.50089 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm05-94281-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm05-94281-30"}]: dispatch 2026-03-09T20:23:35.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:34 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/52937016' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm05-94310-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:35.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:34 vm09 ceph-mon[54524]: from='client.50092 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm05-94310-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:35.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:34 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:23:35.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:34 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:23:35.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:34 vm09 ceph-mon[54524]: from='client.50092 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm05-94310-29","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:35.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:34 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:23:35.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:34 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-17"}]: dispatch 2026-03-09T20:23:35.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:34 vm09 ceph-mon[54524]: osdmap e165: 8 total, 8 up, 8 in 2026-03-09T20:23:35.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:34 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-17"}]: dispatch 2026-03-09T20:23:35.610 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: Running main() from gmock_main.cc 2026-03-09T20:23:35.610 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [==========] Running 57 tests from 4 test suites. 2026-03-09T20:23:35.610 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [----------] Global test environment set-up. 
2026-03-09T20:23:35.610 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [----------] 32 tests from LibRadosAio
2026-03-09T20:23:35.610 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.TooBigPP
2026-03-09T20:23:35.610 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.TooBigPP (2873 ms)
2026-03-09T20:23:35.610 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.PoolQuotaPP
2026-03-09T20:23:35.610 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.PoolQuotaPP (20927 ms)
2026-03-09T20:23:35.610 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.SimpleWritePP
2026-03-09T20:23:35.610 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.SimpleWritePP (7261 ms)
2026-03-09T20:23:35.610 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.WaitForSafePP
2026-03-09T20:23:35.610 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.WaitForSafePP (3397 ms)
2026-03-09T20:23:35.610 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripPP
2026-03-09T20:23:35.610 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripPP (3223 ms)
2026-03-09T20:23:35.610 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripPP2
2026-03-09T20:23:35.610 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripPP2 (2999 ms)
2026-03-09T20:23:35.610 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripPP3
2026-03-09T20:23:35.610 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripPP3 (4171 ms)
2026-03-09T20:23:35.610 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripSparseReadPP
2026-03-09T20:23:35.610 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripSparseReadPP (3269 ms)
2026-03-09T20:23:35.610 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.IsCompletePP
2026-03-09T20:23:35.610 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.IsCompletePP (3040 ms)
2026-03-09T20:23:35.610 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.IsSafePP
2026-03-09T20:23:35.610 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.IsSafePP (3289 ms)
2026-03-09T20:23:35.610 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.ReturnValuePP
2026-03-09T20:23:35.610 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.ReturnValuePP (3207 ms)
2026-03-09T20:23:35.610 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.FlushPP
2026-03-09T20:23:35.610 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.FlushPP (3126 ms)
2026-03-09T20:23:35.610 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.FlushAsyncPP
2026-03-09T20:23:35.610 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.FlushAsyncPP (3131 ms)
2026-03-09T20:23:35.610 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripWriteFullPP
2026-03-09T20:23:35.610 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripWriteFullPP (3008 ms)
2026-03-09T20:23:35.611 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripWriteFullPP2
2026-03-09T20:23:35.611 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripWriteFullPP2 (3145 ms)
2026-03-09T20:23:35.611 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripWriteSamePP
2026-03-09T20:23:35.611 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripWriteSamePP (2310 ms)
2026-03-09T20:23:35.611 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripWriteSamePP2
2026-03-09T20:23:35.611 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripWriteSamePP2 (3121 ms)
2026-03-09T20:23:35.611 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.SimpleStatPPNS
2026-03-09T20:23:35.611 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.SimpleStatPPNS (3052 ms)
2026-03-09T20:23:35.611 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.SimpleStatPP
2026-03-09T20:23:35.611 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.SimpleStatPP (3042 ms)
2026-03-09T20:23:35.611 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.OperateMtime
2026-03-09T20:23:35.611 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.OperateMtime (3114 ms)
2026-03-09T20:23:35.611 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.OperateMtime2
2026-03-09T20:23:35.611 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.OperateMtime2 (3016 ms)
2026-03-09T20:23:35.611 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.StatRemovePP
2026-03-09T20:23:35.611 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.StatRemovePP (3070 ms)
2026-03-09T20:23:35.611 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.ExecuteClassPP
2026-03-09T20:23:35.611 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.ExecuteClassPP (3015 ms)
2026-03-09T20:23:35.611 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.OmapPP
2026-03-09T20:23:35.611 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.OmapPP (3084 ms)
2026-03-09T20:23:35.611 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.MultiWritePP
2026-03-09T20:23:35.611 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.MultiWritePP (3017 ms)
2026-03-09T20:23:35.611 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.AioUnlockPP
2026-03-09T20:23:35.611 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.AioUnlockPP (3004 ms)
2026-03-09T20:23:35.611 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripAppendPP
2026-03-09T20:23:35.621 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:23:35 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available
2026-03-09T20:23:35.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:35 vm05 ceph-mon[61345]: pgmap v193: 324 pgs: 32 unknown, 29 creating+peering, 4 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 253 active+clean; 457 KiB data, 676 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T20:23:35.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:35 vm05 ceph-mon[51870]: pgmap v193: 324 pgs: 32 unknown, 29 creating+peering, 4 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 253 active+clean; 457 KiB data, 676 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T20:23:36.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:35 vm09 ceph-mon[54524]: pgmap v193: 324 pgs: 32 unknown, 29 creating+peering, 4 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 253 active+clean; 457 KiB data, 676 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:23:36.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:36 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:23:36.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:36 vm05 ceph-mon[61345]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:23:36.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:36 vm05 ceph-mon[61345]: from='client.50089 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip2_vm05-94281-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm05-94281-30"}]': finished 2026-03-09T20:23:36.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:36 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-17"}]': finished 2026-03-09T20:23:36.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:36 vm05 ceph-mon[61345]: osdmap e166: 8 total, 8 up, 8 in 2026-03-09T20:23:36.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:36 vm05 ceph-mon[61345]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:23:36.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:36 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:23:36.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:36 vm05 ceph-mon[51870]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:23:36.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:36 vm05 ceph-mon[51870]: from='client.50089 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip2_vm05-94281-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm05-94281-30"}]': finished 2026-03-09T20:23:36.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:36 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-17"}]': finished 2026-03-09T20:23:36.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:36 vm05 ceph-mon[51870]: osdmap e166: 8 total, 8 up, 8 in 2026-03-09T20:23:36.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:36 vm05 ceph-mon[51870]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:23:37.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:36 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:23:37.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:36 vm09 ceph-mon[54524]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:23:37.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 
20:23:36 vm09 ceph-mon[54524]: from='client.50089 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip2_vm05-94281-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm05-94281-30"}]': finished 2026-03-09T20:23:37.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:36 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-17"}]': finished 2026-03-09T20:23:37.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:36 vm09 ceph-mon[54524]: osdmap e166: 8 total, 8 up, 8 in 2026-03-09T20:23:37.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:36 vm09 ceph-mon[54524]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:23:37.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:37 vm05 ceph-mon[61345]: pgmap v196: 300 pgs: 8 unknown, 4 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 282 active+clean; 457 KiB data, 677 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 3 op/s 2026-03-09T20:23:37.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:37 vm05 ceph-mon[61345]: osdmap e167: 8 total, 8 up, 8 in 2026-03-09T20:23:37.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:37 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3513802906' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm05-94310-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:37.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:37 vm05 ceph-mon[61345]: from='client.49403 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm05-94310-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:37.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:37 vm05 ceph-mon[51870]: pgmap v196: 300 pgs: 8 unknown, 4 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 282 active+clean; 457 KiB data, 677 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 3 op/s 2026-03-09T20:23:37.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:37 vm05 ceph-mon[51870]: osdmap e167: 8 total, 8 up, 8 in 2026-03-09T20:23:37.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:37 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3513802906' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm05-94310-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:37.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:37 vm05 ceph-mon[51870]: from='client.49403 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm05-94310-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:38.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:37 vm09 ceph-mon[54524]: pgmap v196: 300 pgs: 8 unknown, 4 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 282 active+clean; 457 KiB data, 677 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 3 op/s 2026-03-09T20:23:38.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:37 vm09 ceph-mon[54524]: osdmap e167: 8 total, 8 up, 8 in 2026-03-09T20:23:38.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:37 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3513802906' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm05-94310-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:38.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:37 vm09 ceph-mon[54524]: from='client.49403 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm05-94310-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:38.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:38 vm05 ceph-mon[61345]: from='client.49403 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm05-94310-30","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:38.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:38 vm05 ceph-mon[61345]: osdmap e168: 8 total, 8 up, 8 in 2026-03-09T20:23:38.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:38 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:38.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:38 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:38.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:38 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/4246888350' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm05-94281-30"}]: dispatch 2026-03-09T20:23:38.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:38 vm05 ceph-mon[61345]: from='client.50089 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm05-94281-30"}]: dispatch 2026-03-09T20:23:38.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:38 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:38.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:38 vm05 ceph-mon[61345]: from='client.50089 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm05-94281-30"}]': finished 2026-03-09T20:23:38.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:38 vm05 ceph-mon[51870]: from='client.49403 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm05-94310-30","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:38.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:38 vm05 ceph-mon[51870]: osdmap e168: 8 total, 8 up, 8 in 2026-03-09T20:23:38.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:38 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:38.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:38 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:38.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:38 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/4246888350' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm05-94281-30"}]: dispatch 2026-03-09T20:23:38.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:38 vm05 ceph-mon[51870]: from='client.50089 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm05-94281-30"}]: dispatch 2026-03-09T20:23:38.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:38 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:38.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:38 vm05 ceph-mon[51870]: from='client.50089 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm05-94281-30"}]': finished 2026-03-09T20:23:38.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:23:38 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:23:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:23:39.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:38 vm09 ceph-mon[54524]: from='client.49403 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm05-94310-30","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:39.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:38 vm09 ceph-mon[54524]: osdmap e168: 8 total, 8 up, 8 in 2026-03-09T20:23:39.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:38 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:39.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:38 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:39.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:38 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/4246888350' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm05-94281-30"}]: dispatch 2026-03-09T20:23:39.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:38 vm09 ceph-mon[54524]: from='client.50089 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm05-94281-30"}]: dispatch 2026-03-09T20:23:39.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:38 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:39.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:38 vm09 ceph-mon[54524]: from='client.50089 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm05-94281-30"}]': finished 2026-03-09T20:23:40.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:39 vm09 ceph-mon[54524]: pgmap v199: 324 pgs: 64 unknown, 4 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 250 active+clean; 457 KiB data, 677 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T20:23:40.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:39 vm09 ceph-mon[54524]: osdmap e169: 8 total, 8 up, 8 in 2026-03-09T20:23:40.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:39 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/4246888350' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm05-94281-30"}]: dispatch 2026-03-09T20:23:40.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:39 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-19", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:23:40.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:39 vm09 ceph-mon[54524]: from='client.50089 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm05-94281-30"}]: dispatch 2026-03-09T20:23:40.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:39 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-19", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:23:40.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:39 vm09 ceph-mon[54524]: from='client.50089 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm05-94281-30"}]': finished 2026-03-09T20:23:40.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:39 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-19", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:23:40.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:39 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-19"}]: dispatch 2026-03-09T20:23:40.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:39 vm09 ceph-mon[54524]: osdmap e170: 8 total, 8 up, 8 in 2026-03-09T20:23:40.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:39 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1465281857' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm05-94310-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:40.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:39 vm05 ceph-mon[61345]: pgmap v199: 324 pgs: 64 unknown, 4 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 250 active+clean; 457 KiB data, 677 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T20:23:40.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:39 vm05 ceph-mon[61345]: osdmap e169: 8 total, 8 up, 8 in 2026-03-09T20:23:40.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:39 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/4246888350' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm05-94281-30"}]: dispatch 2026-03-09T20:23:40.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:39 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-19", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:23:40.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:39 vm05 ceph-mon[61345]: from='client.50089 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm05-94281-30"}]: dispatch 2026-03-09T20:23:40.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:39 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-19", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:23:40.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:39 vm05 ceph-mon[61345]: from='client.50089 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm05-94281-30"}]': finished 2026-03-09T20:23:40.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:39 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-19", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:23:40.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:39 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-19"}]: dispatch 2026-03-09T20:23:40.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:39 vm05 ceph-mon[61345]: osdmap e170: 8 total, 8 up, 8 in 2026-03-09T20:23:40.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:39 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1465281857' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm05-94310-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:40.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:39 vm05 ceph-mon[51870]: pgmap v199: 324 pgs: 64 unknown, 4 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 250 active+clean; 457 KiB data, 677 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T20:23:40.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:39 vm05 ceph-mon[51870]: osdmap e169: 8 total, 8 up, 8 in 2026-03-09T20:23:40.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:39 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/4246888350' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm05-94281-30"}]: dispatch 2026-03-09T20:23:40.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:39 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-19", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:23:40.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:39 vm05 ceph-mon[51870]: from='client.50089 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm05-94281-30"}]: dispatch 2026-03-09T20:23:40.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:39 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-19", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:23:40.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:39 vm05 ceph-mon[51870]: from='client.50089 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm05-94281-30"}]': finished 2026-03-09T20:23:40.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:39 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-19", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:23:40.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:39 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-19"}]: dispatch 2026-03-09T20:23:40.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:39 vm05 ceph-mon[51870]: osdmap e170: 8 total, 8 up, 8 in 2026-03-09T20:23:40.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:39 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1465281857' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm05-94310-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:41.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:40 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/2434591765' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm05-94281-31"}]: dispatch 2026-03-09T20:23:41.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:40 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-19"}]: dispatch 2026-03-09T20:23:41.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:40 vm05 ceph-mon[61345]: from='client.50101 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm05-94310-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:41.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:40 vm05 ceph-mon[61345]: from='client.50107 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm05-94281-31"}]: dispatch 2026-03-09T20:23:41.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:40 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2434591765' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm05-94281-31"}]: dispatch 2026-03-09T20:23:41.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:40 vm05 ceph-mon[61345]: from='client.50107 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm05-94281-31"}]: dispatch 2026-03-09T20:23:41.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:40 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2434591765' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm05-94281-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:41.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:40 vm05 ceph-mon[61345]: from='client.50107 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm05-94281-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:41.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:40 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2434591765' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm05-94281-31"}]: dispatch 2026-03-09T20:23:41.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:40 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-19"}]: dispatch 2026-03-09T20:23:41.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:40 vm05 ceph-mon[51870]: from='client.50101 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm05-94310-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:41.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:40 vm05 ceph-mon[51870]: from='client.50107 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm05-94281-31"}]: dispatch 2026-03-09T20:23:41.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:40 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/2434591765' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm05-94281-31"}]: dispatch 2026-03-09T20:23:41.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:40 vm05 ceph-mon[51870]: from='client.50107 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm05-94281-31"}]: dispatch 2026-03-09T20:23:41.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:40 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2434591765' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm05-94281-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:41.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:40 vm05 ceph-mon[51870]: from='client.50107 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm05-94281-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:41.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:40 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2434591765' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm05-94281-31"}]: dispatch 2026-03-09T20:23:41.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:40 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-19"}]: dispatch 2026-03-09T20:23:41.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:40 vm09 ceph-mon[54524]: from='client.50101 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm05-94310-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:41.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:40 vm09 ceph-mon[54524]: from='client.50107 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm05-94281-31"}]: dispatch 2026-03-09T20:23:41.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:40 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2434591765' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm05-94281-31"}]: dispatch 2026-03-09T20:23:41.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:40 vm09 ceph-mon[54524]: from='client.50107 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm05-94281-31"}]: dispatch 2026-03-09T20:23:41.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:40 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/2434591765' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm05-94281-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:41.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:40 vm09 ceph-mon[54524]: from='client.50107 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm05-94281-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:42.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:41 vm05 ceph-mon[61345]: pgmap v202: 324 pgs: 64 unknown, 4 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 250 active+clean; 457 KiB data, 677 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:23:42.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:41 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-19"}]': finished 2026-03-09T20:23:42.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:41 vm05 ceph-mon[61345]: from='client.50101 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm05-94310-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:42.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:41 vm05 ceph-mon[61345]: from='client.50107 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm05-94281-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:23:42.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:41 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-19", "mode": "writeback"}]: dispatch 2026-03-09T20:23:42.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:41 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/2434591765' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm05-94281-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm05-94281-31"}]: dispatch 2026-03-09T20:23:42.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:41 vm05 ceph-mon[61345]: osdmap e171: 8 total, 8 up, 8 in 2026-03-09T20:23:42.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:41 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-19", "mode": "writeback"}]: dispatch 2026-03-09T20:23:42.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:41 vm05 ceph-mon[61345]: from='client.50107 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm05-94281-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm05-94281-31"}]: dispatch 2026-03-09T20:23:42.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:41 vm05 ceph-mon[61345]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:23:42.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:41 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-19", "mode": "writeback"}]': finished 2026-03-09T20:23:42.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:41 vm05 ceph-mon[61345]: osdmap e172: 8 total, 8 up, 8 in 2026-03-09T20:23:42.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:41 vm05 ceph-mon[51870]: pgmap v202: 324 pgs: 64 unknown, 4 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 250 active+clean; 457 KiB data, 677 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:23:42.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:41 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-19"}]': finished 2026-03-09T20:23:42.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:41 vm05 ceph-mon[51870]: from='client.50101 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm05-94310-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:42.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:41 vm05 ceph-mon[51870]: from='client.50107 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm05-94281-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:23:42.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:41 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-19", "mode": "writeback"}]: dispatch 2026-03-09T20:23:42.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:41 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/2434591765' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm05-94281-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm05-94281-31"}]: dispatch 2026-03-09T20:23:42.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:41 vm05 ceph-mon[51870]: osdmap e171: 8 total, 8 up, 8 in 2026-03-09T20:23:42.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:41 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-19", "mode": "writeback"}]: dispatch 2026-03-09T20:23:42.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:41 vm05 ceph-mon[51870]: from='client.50107 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm05-94281-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm05-94281-31"}]: dispatch 2026-03-09T20:23:42.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:41 vm05 ceph-mon[51870]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:23:42.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:41 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-19", "mode": "writeback"}]': finished 2026-03-09T20:23:42.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:41 vm05 ceph-mon[51870]: osdmap e172: 8 total, 8 up, 8 in 2026-03-09T20:23:42.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:41 vm09 ceph-mon[54524]: pgmap v202: 324 pgs: 64 unknown, 4 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 250 active+clean; 457 KiB data, 677 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:23:42.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:41 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-19"}]': finished 2026-03-09T20:23:42.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:41 vm09 ceph-mon[54524]: from='client.50101 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm05-94310-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:42.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:41 vm09 ceph-mon[54524]: from='client.50107 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm05-94281-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:23:42.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:41 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-19", "mode": "writeback"}]: dispatch 2026-03-09T20:23:42.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:41 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/2434591765' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm05-94281-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm05-94281-31"}]: dispatch 2026-03-09T20:23:42.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:41 vm09 ceph-mon[54524]: osdmap e171: 8 total, 8 up, 8 in 2026-03-09T20:23:42.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:41 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-19", "mode": "writeback"}]: dispatch 2026-03-09T20:23:42.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:41 vm09 ceph-mon[54524]: from='client.50107 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm05-94281-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm05-94281-31"}]: dispatch 2026-03-09T20:23:42.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:41 vm09 ceph-mon[54524]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:23:42.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:41 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-19", "mode": "writeback"}]': finished 2026-03-09T20:23:42.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:41 vm09 ceph-mon[54524]: osdmap e172: 8 total, 8 up, 8 in 2026-03-09T20:23:43.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:42 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:23:43.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:42 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:23:43.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:42 vm05 ceph-mon[61345]: from='client.50107 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm05-94281-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm05-94281-31"}]': finished 2026-03-09T20:23:43.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:42 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:23:43.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:42 vm05 ceph-mon[61345]: osdmap e173: 8 total, 8 up, 8 in 2026-03-09T20:23:43.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:42 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-19"}]: dispatch 2026-03-09T20:23:43.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:42 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-19"}]: dispatch 2026-03-09T20:23:43.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:42 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/4210015806' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm05-94310-32","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:43.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:42 vm05 ceph-mon[61345]: from='client.50113 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm05-94310-32","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:43.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:42 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:23:43.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:42 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:23:43.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:42 vm05 ceph-mon[51870]: from='client.50107 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm05-94281-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm05-94281-31"}]': finished 2026-03-09T20:23:43.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:42 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:23:43.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:42 vm05 ceph-mon[51870]: osdmap e173: 8 total, 8 up, 8 in 2026-03-09T20:23:43.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:42 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-19"}]: dispatch 2026-03-09T20:23:43.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:42 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-19"}]: dispatch 2026-03-09T20:23:43.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:42 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/4210015806' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm05-94310-32","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:43.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:42 vm05 ceph-mon[51870]: from='client.50113 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm05-94310-32","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:43.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:42 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:23:43.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:42 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:23:43.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:42 vm09 ceph-mon[54524]: from='client.50107 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm05-94281-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm05-94281-31"}]': finished 2026-03-09T20:23:43.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:42 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:23:43.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:42 vm09 ceph-mon[54524]: osdmap e173: 8 total, 8 up, 8 in 2026-03-09T20:23:43.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:42 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-19"}]: dispatch 2026-03-09T20:23:43.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:42 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-19"}]: dispatch 2026-03-09T20:23:43.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:42 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/4210015806' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm05-94310-32","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:43.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:42 vm09 ceph-mon[54524]: from='client.50113 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm05-94310-32","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:44.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:43 vm05 ceph-mon[61345]: pgmap v205: 292 pgs: 4 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 282 active+clean; 456 KiB data, 678 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:23:44.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:43 vm05 ceph-mon[61345]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:23:44.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:43 vm05 ceph-mon[61345]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:23:44.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:43 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-19"}]': finished 2026-03-09T20:23:44.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:43 vm05 ceph-mon[61345]: from='client.50113 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm05-94310-32","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:44.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:43 vm05 ceph-mon[61345]: osdmap e174: 8 total, 8 up, 8 in 2026-03-09T20:23:44.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:43 vm05 ceph-mon[51870]: pgmap v205: 292 pgs: 4 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 282 active+clean; 456 KiB data, 678 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:23:44.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:43 vm05 ceph-mon[51870]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:23:44.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:43 vm05 ceph-mon[51870]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:23:44.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:43 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-19"}]': finished 2026-03-09T20:23:44.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:43 vm05 ceph-mon[51870]: from='client.50113 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm05-94310-32","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:44.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:43 vm05 ceph-mon[51870]: osdmap e174: 8 total, 8 up, 8 in 2026-03-09T20:23:44.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:43 vm09 ceph-mon[54524]: pgmap v205: 292 pgs: 4 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 282 active+clean; 456 KiB data, 678 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:23:44.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:43 vm09 ceph-mon[54524]: 
Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:23:44.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:43 vm09 ceph-mon[54524]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:23:44.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:43 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-19"}]': finished 2026-03-09T20:23:44.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:43 vm09 ceph-mon[54524]: from='client.50113 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm05-94310-32","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:44.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:43 vm09 ceph-mon[54524]: osdmap e174: 8 total, 8 up, 8 in 2026-03-09T20:23:45.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:44 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:23:45.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:44 vm05 ceph-mon[61345]: osdmap e175: 8 total, 8 up, 8 in 2026-03-09T20:23:45.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:44 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2434591765' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm05-94281-31"}]: dispatch 2026-03-09T20:23:45.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:44 vm05 ceph-mon[61345]: from='client.50107 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm05-94281-31"}]: dispatch 2026-03-09T20:23:45.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:44 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:23:45.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:44 vm05 ceph-mon[51870]: osdmap e175: 8 total, 8 up, 8 in 2026-03-09T20:23:45.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:44 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2434591765' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm05-94281-31"}]: dispatch 2026-03-09T20:23:45.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:44 vm05 ceph-mon[51870]: from='client.50107 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm05-94281-31"}]: dispatch 2026-03-09T20:23:45.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:44 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:23:45.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:44 vm09 ceph-mon[54524]: osdmap e175: 8 total, 8 up, 8 in 2026-03-09T20:23:45.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:44 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/2434591765' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm05-94281-31"}]: dispatch 2026-03-09T20:23:45.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:44 vm09 ceph-mon[54524]: from='client.50107 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm05-94281-31"}]: dispatch 2026-03-09T20:23:45.772 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:23:45 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:23:46.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:45 vm05 ceph-mon[61345]: pgmap v208: 332 pgs: 40 unknown, 4 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 282 active+clean; 456 KiB data, 678 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:23:46.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:45 vm05 ceph-mon[61345]: from='client.50107 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm05-94281-31"}]': finished 2026-03-09T20:23:46.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:45 vm05 ceph-mon[61345]: osdmap e176: 8 total, 8 up, 8 in 2026-03-09T20:23:46.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:45 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2434591765' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm05-94281-31"}]: dispatch 2026-03-09T20:23:46.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:45 vm05 ceph-mon[61345]: from='client.50107 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm05-94281-31"}]: dispatch 2026-03-09T20:23:46.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:45 vm05 ceph-mon[51870]: pgmap v208: 332 pgs: 40 unknown, 4 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 282 active+clean; 456 KiB data, 678 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:23:46.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:45 vm05 ceph-mon[51870]: from='client.50107 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm05-94281-31"}]': finished 2026-03-09T20:23:46.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:45 vm05 ceph-mon[51870]: osdmap e176: 8 total, 8 up, 8 in 2026-03-09T20:23:46.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:45 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/2434591765' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm05-94281-31"}]: dispatch 2026-03-09T20:23:46.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:45 vm05 ceph-mon[51870]: from='client.50107 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm05-94281-31"}]: dispatch 2026-03-09T20:23:46.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:45 vm09 ceph-mon[54524]: pgmap v208: 332 pgs: 40 unknown, 4 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 282 active+clean; 456 KiB data, 678 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:23:46.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:45 vm09 ceph-mon[54524]: from='client.50107 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm05-94281-31"}]': finished 2026-03-09T20:23:46.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:45 vm09 ceph-mon[54524]: osdmap e176: 8 total, 8 up, 8 in 2026-03-09T20:23:46.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:45 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2434591765' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm05-94281-31"}]: dispatch 2026-03-09T20:23:46.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:45 vm09 ceph-mon[54524]: from='client.50107 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm05-94281-31"}]: dispatch 2026-03-09T20:23:47.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:46 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:23:47.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:46 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:47.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:46 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:47.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:46 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/55344250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm05-94310-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:47.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:46 vm05 ceph-mon[61345]: from='client.49421 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm05-94310-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:47.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:46 vm05 ceph-mon[61345]: from='client.50107 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm05-94281-31"}]': finished 2026-03-09T20:23:47.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:46 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:47.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:46 vm05 ceph-mon[61345]: from='client.49421 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm05-94310-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:47.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:46 vm05 ceph-mon[61345]: osdmap e177: 8 total, 8 up, 8 in 2026-03-09T20:23:47.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:46 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2262793125' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm05-94281-32"}]: dispatch 2026-03-09T20:23:47.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:46 vm05 ceph-mon[61345]: from='client.49427 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm05-94281-32"}]: dispatch 2026-03-09T20:23:47.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:46 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:23:47.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:46 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:47.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:46 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:47.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:46 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/55344250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm05-94310-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:47.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:46 vm05 ceph-mon[51870]: from='client.49421 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm05-94310-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:47.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:46 vm05 ceph-mon[51870]: from='client.50107 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm05-94281-31"}]': finished 2026-03-09T20:23:47.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:46 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:47.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:46 vm05 ceph-mon[51870]: from='client.49421 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm05-94310-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:47.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:46 vm05 ceph-mon[51870]: osdmap e177: 8 total, 8 up, 8 in 2026-03-09T20:23:47.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:46 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2262793125' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm05-94281-32"}]: dispatch 2026-03-09T20:23:47.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:46 vm05 ceph-mon[51870]: from='client.49427 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm05-94281-32"}]: dispatch 2026-03-09T20:23:47.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:46 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:23:47.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:46 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:47.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:46 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:47.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:46 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/55344250' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm05-94310-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:47.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:46 vm09 ceph-mon[54524]: from='client.49421 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm05-94310-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:47.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:46 vm09 ceph-mon[54524]: from='client.50107 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm05-94281-31"}]': finished 2026-03-09T20:23:47.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:46 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:47.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:46 vm09 ceph-mon[54524]: from='client.49421 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm05-94310-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:47.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:46 vm09 ceph-mon[54524]: osdmap e177: 8 total, 8 up, 8 in 2026-03-09T20:23:47.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:46 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2262793125' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm05-94281-32"}]: dispatch 2026-03-09T20:23:47.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:46 vm09 ceph-mon[54524]: from='client.49427 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm05-94281-32"}]: dispatch 2026-03-09T20:23:48.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[61345]: pgmap v211: 324 pgs: 64 unknown, 4 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 250 active+clean; 456 KiB data, 682 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2262793125' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm05-94281-32"}]: dispatch 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[61345]: from='client.49427 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm05-94281-32"}]: dispatch 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2262793125' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm05-94281-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[61345]: from='client.49427 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm05-94281-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-21", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-21", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/55344250' entity='client.admin' cmd=[{ 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[61345]: "prefix": "osd pool set", 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[61345]: "pool": "PoolEIOFlag_vm05-94310-33", 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[61345]: "var": "eio", 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[61345]: "val": "true" 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[61345]: }]: dispatch 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[61345]: from='client.49421 ' entity='client.admin' cmd=[{ 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[61345]: "prefix": "osd pool set", 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[61345]: "pool": "PoolEIOFlag_vm05-94310-33", 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[61345]: "var": "eio", 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[61345]: "val": "true" 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[61345]: }]: dispatch 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[61345]: from='client.49427 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm05-94281-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-21", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[61345]: from='client.49421 ' entity='client.admin' cmd='[{ 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[61345]: "prefix": "osd pool set", 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[61345]: "pool": "PoolEIOFlag_vm05-94310-33", 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[61345]: "var": "eio", 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[61345]: "val": "true" 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[61345]: }]': finished 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:47 vm05 
ceph-mon[61345]: osdmap e178: 8 total, 8 up, 8 in 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-21"}]: dispatch 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-21"}]: dispatch 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2262793125' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm05-94281-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm05-94281-32"}]: dispatch 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[61345]: from='client.49427 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm05-94281-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm05-94281-32"}]: dispatch 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[51870]: pgmap v211: 324 pgs: 64 unknown, 4 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 250 active+clean; 456 KiB data, 682 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2262793125' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm05-94281-32"}]: dispatch 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[51870]: from='client.49427 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm05-94281-32"}]: dispatch 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2262793125' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm05-94281-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[51870]: from='client.49427 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm05-94281-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-21", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-21", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/55344250' entity='client.admin' cmd=[{ 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[51870]: "prefix": "osd pool set", 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[51870]: "pool": "PoolEIOFlag_vm05-94310-33", 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[51870]: "var": "eio", 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[51870]: "val": "true" 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[51870]: }]: dispatch 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[51870]: from='client.49421 ' entity='client.admin' cmd=[{ 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[51870]: "prefix": "osd pool set", 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[51870]: "pool": "PoolEIOFlag_vm05-94310-33", 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[51870]: "var": "eio", 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[51870]: "val": "true" 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[51870]: }]: dispatch 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[51870]: from='client.49427 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm05-94281-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-21", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[51870]: from='client.49421 ' entity='client.admin' cmd='[{ 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[51870]: "prefix": "osd pool set", 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[51870]: "pool": "PoolEIOFlag_vm05-94310-33", 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[51870]: "var": "eio", 2026-03-09T20:23:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[51870]: "val": "true" 2026-03-09T20:23:48.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[51870]: }]': finished 2026-03-09T20:23:48.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[51870]: osdmap e178: 8 total, 8 up, 8 in 2026-03-09T20:23:48.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-21"}]: dispatch 2026-03-09T20:23:48.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-21"}]: dispatch 2026-03-09T20:23:48.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2262793125' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm05-94281-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm05-94281-32"}]: dispatch 2026-03-09T20:23:48.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:47 vm05 ceph-mon[51870]: from='client.49427 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm05-94281-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm05-94281-32"}]: dispatch 2026-03-09T20:23:48.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:47 vm09 ceph-mon[54524]: pgmap v211: 324 pgs: 64 unknown, 4 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 250 active+clean; 456 KiB data, 682 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:23:48.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:47 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2262793125' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm05-94281-32"}]: dispatch 2026-03-09T20:23:48.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:47 vm09 ceph-mon[54524]: from='client.49427 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm05-94281-32"}]: dispatch 2026-03-09T20:23:48.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:47 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2262793125' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm05-94281-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:48.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:47 vm09 ceph-mon[54524]: from='client.49427 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm05-94281-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:48.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:47 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-21", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:23:48.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:47 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-21", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:23:48.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:47 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/55344250' entity='client.admin' cmd=[{ 2026-03-09T20:23:48.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:47 vm09 ceph-mon[54524]: "prefix": "osd pool set", 2026-03-09T20:23:48.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:47 vm09 ceph-mon[54524]: "pool": "PoolEIOFlag_vm05-94310-33", 2026-03-09T20:23:48.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:47 vm09 ceph-mon[54524]: "var": "eio", 2026-03-09T20:23:48.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:47 vm09 ceph-mon[54524]: "val": "true" 2026-03-09T20:23:48.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:47 vm09 ceph-mon[54524]: }]: dispatch 2026-03-09T20:23:48.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:47 vm09 ceph-mon[54524]: from='client.49421 ' entity='client.admin' cmd=[{ 2026-03-09T20:23:48.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:47 vm09 ceph-mon[54524]: "prefix": "osd pool set", 2026-03-09T20:23:48.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:47 vm09 ceph-mon[54524]: "pool": "PoolEIOFlag_vm05-94310-33", 2026-03-09T20:23:48.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:47 vm09 ceph-mon[54524]: "var": "eio", 2026-03-09T20:23:48.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:47 vm09 ceph-mon[54524]: "val": "true" 2026-03-09T20:23:48.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:47 vm09 ceph-mon[54524]: }]: dispatch 2026-03-09T20:23:48.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:47 vm09 ceph-mon[54524]: from='client.49427 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm05-94281-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:23:48.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:47 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-21", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:23:48.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:47 vm09 ceph-mon[54524]: from='client.49421 ' entity='client.admin' cmd='[{ 2026-03-09T20:23:48.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:47 vm09 ceph-mon[54524]: "prefix": "osd pool set", 2026-03-09T20:23:48.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:47 vm09 ceph-mon[54524]: "pool": "PoolEIOFlag_vm05-94310-33", 2026-03-09T20:23:48.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:47 vm09 ceph-mon[54524]: "var": "eio", 2026-03-09T20:23:48.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:47 vm09 ceph-mon[54524]: "val": "true" 2026-03-09T20:23:48.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:47 vm09 ceph-mon[54524]: }]': finished 2026-03-09T20:23:48.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:47 vm09 ceph-mon[54524]: osdmap e178: 8 total, 8 up, 8 in 2026-03-09T20:23:48.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:47 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-21"}]: dispatch 2026-03-09T20:23:48.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:47 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-21"}]: dispatch 2026-03-09T20:23:48.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:47 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2262793125' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm05-94281-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm05-94281-32"}]: dispatch 2026-03-09T20:23:48.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:47 vm09 ceph-mon[54524]: from='client.49427 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm05-94281-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm05-94281-32"}]: dispatch 2026-03-09T20:23:48.909 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:23:48 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:23:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:23:50.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:49 vm05 ceph-mon[61345]: pgmap v214: 324 pgs: 64 unknown, 4 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 250 active+clean; 456 KiB data, 682 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:23:50.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:49 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-21"}]': finished 2026-03-09T20:23:50.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:49 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-21", "mode": "writeback"}]: dispatch 2026-03-09T20:23:50.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:49 vm05 ceph-mon[61345]: osdmap e179: 8 total, 8 up, 8 in 2026-03-09T20:23:50.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:49 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-21", "mode": "writeback"}]: dispatch 2026-03-09T20:23:50.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:49 vm05 ceph-mon[61345]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:23:50.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:49 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:23:50.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:49 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:23:50.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:49 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:23:50.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:49 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:23:50.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:49 vm05 ceph-mon[51870]: pgmap v214: 324 pgs: 64 unknown, 4 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 250 active+clean; 456 KiB data, 682 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:23:50.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:49 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-21"}]': finished 2026-03-09T20:23:50.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:49 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-21", "mode": "writeback"}]: dispatch 2026-03-09T20:23:50.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:49 vm05 ceph-mon[51870]: osdmap e179: 8 total, 8 up, 8 in 2026-03-09T20:23:50.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:49 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-21", "mode": "writeback"}]: dispatch 2026-03-09T20:23:50.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:49 vm05 ceph-mon[51870]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:23:50.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:49 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:23:50.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:49 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:23:50.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:49 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:23:50.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:49 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:23:50.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:49 vm09 ceph-mon[54524]: pgmap v214: 324 pgs: 64 unknown, 4 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 250 active+clean; 456 KiB data, 682 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:23:50.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:49 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-21"}]': finished 2026-03-09T20:23:50.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:49 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-21", "mode": "writeback"}]: dispatch 2026-03-09T20:23:50.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:49 vm09 ceph-mon[54524]: osdmap e179: 8 total, 8 up, 8 in 2026-03-09T20:23:50.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:49 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-21", "mode": "writeback"}]: dispatch 2026-03-09T20:23:50.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:49 vm09 ceph-mon[54524]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:23:50.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:49 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:23:50.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:49 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:23:50.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:49 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:23:50.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:49 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:23:51.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:50 vm05 ceph-mon[61345]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:23:51.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:50 vm05 ceph-mon[61345]: from='client.49427 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsComplete_vm05-94281-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm05-94281-32"}]': finished 2026-03-09T20:23:51.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:50 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-21", "mode": "writeback"}]': finished 2026-03-09T20:23:51.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:50 vm05 ceph-mon[61345]: osdmap e180: 8 total, 8 up, 8 in 2026-03-09T20:23:51.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:50 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2681799123' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm05-94310-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:51.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:50 vm05 ceph-mon[61345]: from='client.50125 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm05-94310-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:51.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:50 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:23:51.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:50 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:23:51.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:50 vm05 ceph-mon[51870]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:23:51.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:50 vm05 ceph-mon[51870]: from='client.49427 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsComplete_vm05-94281-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm05-94281-32"}]': finished 2026-03-09T20:23:51.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:50 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-21", "mode": "writeback"}]': finished 2026-03-09T20:23:51.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:50 vm05 ceph-mon[51870]: osdmap e180: 8 total, 8 up, 8 in 2026-03-09T20:23:51.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:50 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2681799123' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm05-94310-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:51.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:50 vm05 ceph-mon[51870]: from='client.50125 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm05-94310-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:51.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:50 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:23:51.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:50 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:23:51.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:50 vm09 ceph-mon[54524]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:23:51.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:50 vm09 ceph-mon[54524]: from='client.49427 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsComplete_vm05-94281-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm05-94281-32"}]': finished 2026-03-09T20:23:51.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:50 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-21", "mode": "writeback"}]': finished 2026-03-09T20:23:51.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:50 vm09 ceph-mon[54524]: osdmap e180: 8 total, 8 up, 8 in 2026-03-09T20:23:51.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:50 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/2681799123' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm05-94310-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:51.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:50 vm09 ceph-mon[54524]: from='client.50125 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm05-94310-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:51.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:50 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:23:51.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:50 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:23:52.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:51 vm05 ceph-mon[61345]: pgmap v217: 332 pgs: 72 unknown, 4 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 250 active+clean; 456 KiB data, 682 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:23:52.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:51 vm05 ceph-mon[61345]: from='client.50125 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiReads_vm05-94310-34","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:52.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:51 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:23:52.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:51 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-21"}]: dispatch 2026-03-09T20:23:52.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:51 vm05 ceph-mon[61345]: osdmap e181: 8 total, 8 up, 8 in 2026-03-09T20:23:52.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:51 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-21"}]: dispatch 2026-03-09T20:23:52.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:51 vm05 ceph-mon[51870]: pgmap v217: 332 pgs: 72 unknown, 4 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 250 active+clean; 456 KiB data, 682 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:23:52.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:51 vm05 ceph-mon[51870]: from='client.50125 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiReads_vm05-94310-34","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:52.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:51 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:23:52.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:51 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-21"}]: dispatch 2026-03-09T20:23:52.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:51 vm05 ceph-mon[51870]: osdmap e181: 8 total, 8 up, 8 in 2026-03-09T20:23:52.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:51 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-21"}]: dispatch 2026-03-09T20:23:52.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:51 vm09 ceph-mon[54524]: pgmap v217: 332 pgs: 72 unknown, 4 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 250 active+clean; 456 KiB data, 682 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:23:52.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:51 vm09 ceph-mon[54524]: from='client.50125 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiReads_vm05-94310-34","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:52.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:51 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:23:52.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:51 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-21"}]: dispatch 2026-03-09T20:23:52.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:51 vm09 ceph-mon[54524]: osdmap e181: 8 total, 8 up, 8 in 2026-03-09T20:23:52.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:51 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-21"}]: dispatch 2026-03-09T20:23:53.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:52 vm05 ceph-mon[61345]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:23:53.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:52 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-21"}]': finished 2026-03-09T20:23:53.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:52 vm05 ceph-mon[61345]: osdmap e182: 8 total, 8 up, 8 in 2026-03-09T20:23:53.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:52 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/2262793125' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm05-94281-32"}]: dispatch 2026-03-09T20:23:53.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:52 vm05 ceph-mon[61345]: from='client.49427 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm05-94281-32"}]: dispatch 2026-03-09T20:23:53.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:52 vm05 ceph-mon[51870]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:23:53.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:52 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-21"}]': finished 2026-03-09T20:23:53.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:52 vm05 ceph-mon[51870]: osdmap e182: 8 total, 8 up, 8 in 2026-03-09T20:23:53.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:52 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2262793125' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm05-94281-32"}]: dispatch 2026-03-09T20:23:53.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:52 vm05 ceph-mon[51870]: from='client.49427 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm05-94281-32"}]: dispatch 2026-03-09T20:23:53.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:52 vm09 ceph-mon[54524]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:23:53.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:52 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-21"}]': finished 2026-03-09T20:23:53.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:52 vm09 ceph-mon[54524]: osdmap e182: 8 total, 8 up, 8 in 2026-03-09T20:23:53.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:52 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2262793125' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm05-94281-32"}]: dispatch 2026-03-09T20:23:53.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:52 vm09 ceph-mon[54524]: from='client.49427 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm05-94281-32"}]: dispatch 2026-03-09T20:23:54.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:53 vm09 ceph-mon[54524]: pgmap v220: 292 pgs: 3 active+clean+snaptrim_wait, 5 active+clean+snaptrim, 284 active+clean; 456 KiB data, 691 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 3 op/s 2026-03-09T20:23:54.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:53 vm09 ceph-mon[54524]: from='client.49427 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm05-94281-32"}]': finished 2026-03-09T20:23:54.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:53 vm09 ceph-mon[54524]: osdmap e183: 8 total, 8 up, 8 in 2026-03-09T20:23:54.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:53 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/2262793125' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm05-94281-32"}]: dispatch 2026-03-09T20:23:54.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:53 vm09 ceph-mon[54524]: from='client.49427 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm05-94281-32"}]: dispatch 2026-03-09T20:23:54.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:53 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3132996622' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm05-94310-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:54.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:53 vm09 ceph-mon[54524]: from='client.50131 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm05-94310-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:54.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:53 vm05 ceph-mon[51870]: pgmap v220: 292 pgs: 3 active+clean+snaptrim_wait, 5 active+clean+snaptrim, 284 active+clean; 456 KiB data, 691 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 3 op/s 2026-03-09T20:23:54.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:53 vm05 ceph-mon[51870]: from='client.49427 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm05-94281-32"}]': finished 2026-03-09T20:23:54.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:53 vm05 ceph-mon[51870]: osdmap e183: 8 total, 8 up, 8 in 2026-03-09T20:23:54.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:53 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2262793125' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm05-94281-32"}]: dispatch 2026-03-09T20:23:54.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:53 vm05 ceph-mon[51870]: from='client.49427 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm05-94281-32"}]: dispatch 2026-03-09T20:23:54.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:53 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3132996622' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm05-94310-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:54.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:53 vm05 ceph-mon[51870]: from='client.50131 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm05-94310-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:54.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:53 vm05 ceph-mon[61345]: pgmap v220: 292 pgs: 3 active+clean+snaptrim_wait, 5 active+clean+snaptrim, 284 active+clean; 456 KiB data, 691 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 3 op/s 2026-03-09T20:23:54.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:53 vm05 ceph-mon[61345]: from='client.49427 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm05-94281-32"}]': finished 2026-03-09T20:23:54.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:53 vm05 ceph-mon[61345]: osdmap e183: 8 total, 8 up, 8 in 2026-03-09T20:23:54.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:53 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/2262793125' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm05-94281-32"}]: dispatch 2026-03-09T20:23:54.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:53 vm05 ceph-mon[61345]: from='client.49427 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm05-94281-32"}]: dispatch 2026-03-09T20:23:54.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:53 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3132996622' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm05-94310-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:54.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:53 vm05 ceph-mon[61345]: from='client.50131 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm05-94310-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:55.344 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:55 vm09 ceph-mon[54524]: from='client.49427 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsComplete_vm05-94281-32"}]': finished 2026-03-09T20:23:55.344 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:55 vm09 ceph-mon[54524]: from='client.50131 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm05-94310-35","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:55.344 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:55 vm09 ceph-mon[54524]: osdmap e184: 8 total, 8 up, 8 in 2026-03-09T20:23:55.344 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:55 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:55.344 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:55 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:55.344 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:55 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/150851681' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm05-94281-33"}]: dispatch 2026-03-09T20:23:55.344 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:55 vm09 ceph-mon[54524]: from='client.49439 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm05-94281-33"}]: dispatch 2026-03-09T20:23:55.344 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:55 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/150851681' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm05-94281-33"}]: dispatch 2026-03-09T20:23:55.344 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:55 vm09 ceph-mon[54524]: from='client.49439 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm05-94281-33"}]: dispatch 2026-03-09T20:23:55.344 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:55 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/150851681' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm05-94281-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:55.344 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:55 vm09 ceph-mon[54524]: from='client.49439 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm05-94281-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:55.344 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:55 vm09 ceph-mon[54524]: pgmap v223: 324 pgs: 64 unknown, 3 active+clean+snaptrim_wait, 5 active+clean+snaptrim, 252 active+clean; 456 KiB data, 691 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T20:23:55.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:55 vm05 ceph-mon[51870]: from='client.49427 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsComplete_vm05-94281-32"}]': finished 2026-03-09T20:23:55.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:55 vm05 ceph-mon[51870]: from='client.50131 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm05-94310-35","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:55.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:55 vm05 ceph-mon[51870]: osdmap e184: 8 total, 8 up, 8 in 2026-03-09T20:23:55.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:55 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:55.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:55 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:55.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:55 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/150851681' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm05-94281-33"}]: dispatch 2026-03-09T20:23:55.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:55 vm05 ceph-mon[51870]: from='client.49439 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm05-94281-33"}]: dispatch 2026-03-09T20:23:55.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:55 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/150851681' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm05-94281-33"}]: dispatch 2026-03-09T20:23:55.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:55 vm05 ceph-mon[51870]: from='client.49439 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm05-94281-33"}]: dispatch 2026-03-09T20:23:55.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:55 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/150851681' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm05-94281-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:55.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:55 vm05 ceph-mon[51870]: from='client.49439 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm05-94281-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:55.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:55 vm05 ceph-mon[51870]: pgmap v223: 324 pgs: 64 unknown, 3 active+clean+snaptrim_wait, 5 active+clean+snaptrim, 252 active+clean; 456 KiB data, 691 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T20:23:55.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:55 vm05 ceph-mon[61345]: from='client.49427 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsComplete_vm05-94281-32"}]': finished 2026-03-09T20:23:55.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:55 vm05 ceph-mon[61345]: from='client.50131 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm05-94310-35","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:55.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:55 vm05 ceph-mon[61345]: osdmap e184: 8 total, 8 up, 8 in 2026-03-09T20:23:55.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:55 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:55.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:55 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:55.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:55 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/150851681' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm05-94281-33"}]: dispatch 2026-03-09T20:23:55.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:55 vm05 ceph-mon[61345]: from='client.49439 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm05-94281-33"}]: dispatch 2026-03-09T20:23:55.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:55 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/150851681' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm05-94281-33"}]: dispatch 2026-03-09T20:23:55.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:55 vm05 ceph-mon[61345]: from='client.49439 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm05-94281-33"}]: dispatch 2026-03-09T20:23:55.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:55 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/150851681' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm05-94281-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:55.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:55 vm05 ceph-mon[61345]: from='client.49439 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm05-94281-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:23:55.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:55 vm05 ceph-mon[61345]: pgmap v223: 324 pgs: 64 unknown, 3 active+clean+snaptrim_wait, 5 active+clean+snaptrim, 252 active+clean; 456 KiB data, 691 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T20:23:55.772 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:23:55 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:23:56.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:56 vm05 ceph-mon[51870]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:23:56.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:56 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:56.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:56 vm05 ceph-mon[51870]: from='client.49439 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm05-94281-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:23:56.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:56 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/150851681' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafe_vm05-94281-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm05-94281-33"}]: dispatch 2026-03-09T20:23:56.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:56 vm05 ceph-mon[51870]: osdmap e185: 8 total, 8 up, 8 in 2026-03-09T20:23:56.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:56 vm05 ceph-mon[51870]: from='client.49439 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafe_vm05-94281-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm05-94281-33"}]: dispatch 2026-03-09T20:23:56.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:56 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:23:56.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:56 vm05 ceph-mon[51870]: osdmap e186: 8 total, 8 up, 8 in 2026-03-09T20:23:56.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:56 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2942271976' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-94310-36","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:56.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:56 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:23:56.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:56 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:23:56.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:56 vm05 ceph-mon[61345]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:23:56.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:56 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:56.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:56 vm05 ceph-mon[61345]: from='client.49439 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm05-94281-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:23:56.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:56 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/150851681' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafe_vm05-94281-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm05-94281-33"}]: dispatch 2026-03-09T20:23:56.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:56 vm05 ceph-mon[61345]: osdmap e185: 8 total, 8 up, 8 in 2026-03-09T20:23:56.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:56 vm05 ceph-mon[61345]: from='client.49439 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafe_vm05-94281-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm05-94281-33"}]: dispatch 2026-03-09T20:23:56.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:56 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:23:56.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:56 vm05 ceph-mon[61345]: osdmap e186: 8 total, 8 up, 8 in 2026-03-09T20:23:56.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:56 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2942271976' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-94310-36","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:56.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:56 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:23:56.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:56 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:23:56.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:56 vm09 ceph-mon[54524]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:23:56.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:56 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:56.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:56 vm09 ceph-mon[54524]: from='client.49439 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm05-94281-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:23:56.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:56 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/150851681' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafe_vm05-94281-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm05-94281-33"}]: dispatch 2026-03-09T20:23:56.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:56 vm09 ceph-mon[54524]: osdmap e185: 8 total, 8 up, 8 in 2026-03-09T20:23:56.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:56 vm09 ceph-mon[54524]: from='client.49439 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafe_vm05-94281-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm05-94281-33"}]: dispatch 2026-03-09T20:23:56.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:56 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:23:56.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:56 vm09 ceph-mon[54524]: osdmap e186: 8 total, 8 up, 8 in 2026-03-09T20:23:56.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:56 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2942271976' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-94310-36","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:56.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:56 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:23:56.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:56 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:23:57.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:57 vm05 ceph-mon[61345]: pgmap v226: 324 pgs: 32 unknown, 3 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 285 active+clean; 456 KiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:23:57.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:57 vm05 ceph-mon[61345]: from='client.49439 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafe_vm05-94281-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm05-94281-33"}]': finished 2026-03-09T20:23:57.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:57 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2942271976' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-94310-36","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:57.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:57 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-23", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:23:57.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:57 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-23"}]: dispatch 2026-03-09T20:23:57.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:57 vm05 ceph-mon[61345]: osdmap e187: 8 total, 8 up, 8 in 2026-03-09T20:23:57.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:57 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-23"}]: dispatch 2026-03-09T20:23:57.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:57 vm05 ceph-mon[51870]: pgmap v226: 324 pgs: 32 unknown, 3 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 285 active+clean; 456 KiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:23:57.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:57 vm05 ceph-mon[51870]: from='client.49439 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafe_vm05-94281-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm05-94281-33"}]': finished 2026-03-09T20:23:57.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:57 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/2942271976' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-94310-36","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:57.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:57 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-23", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:23:57.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:57 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-23"}]: dispatch 2026-03-09T20:23:57.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:57 vm05 ceph-mon[51870]: osdmap e187: 8 total, 8 up, 8 in 2026-03-09T20:23:57.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:57 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-23"}]: dispatch 2026-03-09T20:23:57.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:57 vm09 ceph-mon[54524]: pgmap v226: 324 pgs: 32 unknown, 3 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 285 active+clean; 456 KiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:23:57.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:57 vm09 ceph-mon[54524]: from='client.49439 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafe_vm05-94281-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm05-94281-33"}]': finished 2026-03-09T20:23:57.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:57 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2942271976' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-94310-36","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:23:57.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:57 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-23", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:23:57.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:57 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-23"}]: dispatch 2026-03-09T20:23:57.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:57 vm09 ceph-mon[54524]: osdmap e187: 8 total, 8 up, 8 in 2026-03-09T20:23:57.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:57 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-23"}]: dispatch 2026-03-09T20:23:58.909 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:23:58 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:23:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:23:59.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:59 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-23"}]': finished 2026-03-09T20:23:59.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:59 vm05 ceph-mon[61345]: osdmap e188: 8 total, 8 up, 8 in 2026-03-09T20:23:59.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:59 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-23", "mode": "writeback"}]: dispatch 2026-03-09T20:23:59.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:59 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-23", "mode": "writeback"}]: dispatch 2026-03-09T20:23:59.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:59 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/4176997070' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-94310-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:59.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:59 vm05 ceph-mon[61345]: from='client.50134 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-94310-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:59.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:23:59 vm05 ceph-mon[61345]: pgmap v229: 364 pgs: 72 unknown, 3 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 285 active+clean; 456 KiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-09T20:23:59.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:59 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-23"}]': finished 2026-03-09T20:23:59.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:59 vm05 ceph-mon[51870]: osdmap e188: 8 total, 8 up, 8 in 2026-03-09T20:23:59.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:59 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-23", "mode": "writeback"}]: dispatch 2026-03-09T20:23:59.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:59 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-23", "mode": "writeback"}]: dispatch 2026-03-09T20:23:59.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:59 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/4176997070' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-94310-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:59.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:59 vm05 ceph-mon[51870]: from='client.50134 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-94310-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:59.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:23:59 vm05 ceph-mon[51870]: pgmap v229: 364 pgs: 72 unknown, 3 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 285 active+clean; 456 KiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-09T20:23:59.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:59 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-23"}]': finished 2026-03-09T20:23:59.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:59 vm09 ceph-mon[54524]: osdmap e188: 8 total, 8 up, 8 in 2026-03-09T20:23:59.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:59 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-23", "mode": "writeback"}]: dispatch 2026-03-09T20:23:59.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:59 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-23", "mode": "writeback"}]: dispatch 2026-03-09T20:23:59.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:59 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/4176997070' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-94310-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:59.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:59 vm09 ceph-mon[54524]: from='client.50134 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-94310-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:23:59.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:23:59 vm09 ceph-mon[54524]: pgmap v229: 364 pgs: 72 unknown, 3 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 285 active+clean; 456 KiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-09T20:24:00.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:00 vm05 ceph-mon[61345]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:24:00.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:00 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-23", "mode": "writeback"}]': finished 2026-03-09T20:24:00.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:00 vm05 ceph-mon[61345]: from='client.50134 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-94310-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:24:00.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:00 vm05 ceph-mon[61345]: osdmap e189: 8 total, 8 up, 8 in 2026-03-09T20:24:00.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:00 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/150851681' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm05-94281-33"}]: dispatch 2026-03-09T20:24:00.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:00 vm05 ceph-mon[61345]: from='client.49439 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm05-94281-33"}]: dispatch 2026-03-09T20:24:00.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:00 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:24:00.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:00 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:24:00.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:00 vm05 ceph-mon[51870]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:24:00.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:00 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-23", "mode": "writeback"}]': finished 2026-03-09T20:24:00.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:00 vm05 ceph-mon[51870]: from='client.50134 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-94310-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:24:00.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:00 vm05 ceph-mon[51870]: osdmap e189: 8 total, 8 up, 8 in 2026-03-09T20:24:00.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:00 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/150851681' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm05-94281-33"}]: dispatch 2026-03-09T20:24:00.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:00 vm05 ceph-mon[51870]: from='client.49439 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm05-94281-33"}]: dispatch 2026-03-09T20:24:00.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:00 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:24:00.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:00 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:24:00.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:00 vm09 ceph-mon[54524]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:24:00.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:00 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-23", "mode": "writeback"}]': finished 2026-03-09T20:24:00.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:00 vm09 ceph-mon[54524]: from='client.50134 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-94310-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:24:00.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:00 vm09 ceph-mon[54524]: osdmap e189: 8 total, 8 up, 8 in 2026-03-09T20:24:00.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:00 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/150851681' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm05-94281-33"}]: dispatch 2026-03-09T20:24:00.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:00 vm09 ceph-mon[54524]: from='client.49439 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm05-94281-33"}]: dispatch 2026-03-09T20:24:00.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:00 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:24:00.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:00 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:24:01.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:01 vm05 ceph-mon[61345]: from='client.49439 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm05-94281-33"}]': finished 2026-03-09T20:24:01.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:01 vm05 ceph-mon[61345]: osdmap e190: 8 total, 8 up, 8 in 2026-03-09T20:24:01.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:01 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/150851681' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm05-94281-33"}]: dispatch 2026-03-09T20:24:01.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:01 vm05 ceph-mon[61345]: from='client.49439 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm05-94281-33"}]: dispatch 2026-03-09T20:24:01.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:01 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/2863597288' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-94310-38","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:01.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:01 vm05 ceph-mon[61345]: from='client.50140 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-94310-38","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:01.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:01 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:24:01.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:01 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:24:01.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:01 vm05 ceph-mon[61345]: pgmap v232: 388 pgs: 96 unknown, 3 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 285 active+clean; 456 KiB data, 692 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:24:01.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:01 vm05 ceph-mon[61345]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:24:01.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:01 vm05 ceph-mon[51870]: from='client.49439 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm05-94281-33"}]': finished 2026-03-09T20:24:01.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:01 vm05 ceph-mon[51870]: osdmap e190: 8 total, 8 up, 8 in 2026-03-09T20:24:01.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:01 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/150851681' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm05-94281-33"}]: dispatch 2026-03-09T20:24:01.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:01 vm05 ceph-mon[51870]: from='client.49439 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm05-94281-33"}]: dispatch 2026-03-09T20:24:01.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:01 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2863597288' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-94310-38","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:01.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:01 vm05 ceph-mon[51870]: from='client.50140 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-94310-38","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:01.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:01 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:24:01.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:01 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:24:01.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:01 vm05 ceph-mon[51870]: pgmap v232: 388 pgs: 96 unknown, 3 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 285 active+clean; 456 KiB data, 692 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:24:01.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:01 vm05 ceph-mon[51870]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:24:01.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:01 vm09 ceph-mon[54524]: from='client.49439 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm05-94281-33"}]': finished 2026-03-09T20:24:01.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:01 vm09 ceph-mon[54524]: osdmap e190: 8 total, 8 up, 8 in 2026-03-09T20:24:01.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:01 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/150851681' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm05-94281-33"}]: dispatch 2026-03-09T20:24:01.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:01 vm09 ceph-mon[54524]: from='client.49439 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm05-94281-33"}]: dispatch 2026-03-09T20:24:01.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:01 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2863597288' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-94310-38","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:01.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:01 vm09 ceph-mon[54524]: from='client.50140 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-94310-38","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:01.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:01 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:24:01.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:01 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:24:01.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:01 vm09 ceph-mon[54524]: pgmap v232: 388 pgs: 96 unknown, 3 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 285 active+clean; 456 KiB data, 692 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:24:01.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:01 vm09 ceph-mon[54524]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:24:02.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:02 vm05 ceph-mon[61345]: from='client.49439 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafe_vm05-94281-33"}]': finished 2026-03-09T20:24:02.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:02 vm05 ceph-mon[61345]: from='client.50140 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-94310-38","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:24:02.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:02 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:24:02.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:02 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-23"}]: dispatch 2026-03-09T20:24:02.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:02 vm05 ceph-mon[61345]: osdmap e191: 8 total, 8 up, 8 in 2026-03-09T20:24:02.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:02 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-23"}]: dispatch 2026-03-09T20:24:02.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:02 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2178041329' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm05-94281-34"}]: dispatch 2026-03-09T20:24:02.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:02 vm05 ceph-mon[61345]: from='client.50146 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm05-94281-34"}]: dispatch 2026-03-09T20:24:02.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:02 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2178041329' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm05-94281-34"}]: dispatch 2026-03-09T20:24:02.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:02 vm05 ceph-mon[61345]: from='client.50146 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm05-94281-34"}]: dispatch 2026-03-09T20:24:02.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:02 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/2178041329' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm05-94281-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:02.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:02 vm05 ceph-mon[61345]: from='client.50146 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm05-94281-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:02 vm05 ceph-mon[51870]: from='client.49439 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafe_vm05-94281-33"}]': finished 2026-03-09T20:24:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:02 vm05 ceph-mon[51870]: from='client.50140 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-94310-38","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:24:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:02 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:24:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:02 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-23"}]: dispatch 2026-03-09T20:24:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:02 vm05 ceph-mon[51870]: osdmap e191: 8 total, 8 up, 8 in 2026-03-09T20:24:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:02 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-23"}]: dispatch 2026-03-09T20:24:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:02 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2178041329' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm05-94281-34"}]: dispatch 2026-03-09T20:24:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:02 vm05 ceph-mon[51870]: from='client.50146 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm05-94281-34"}]: dispatch 2026-03-09T20:24:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:02 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2178041329' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm05-94281-34"}]: dispatch 2026-03-09T20:24:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:02 vm05 ceph-mon[51870]: from='client.50146 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm05-94281-34"}]: dispatch 2026-03-09T20:24:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:02 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/2178041329' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm05-94281-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:02 vm05 ceph-mon[51870]: from='client.50146 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm05-94281-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:02.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:02 vm09 ceph-mon[54524]: from='client.49439 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafe_vm05-94281-33"}]': finished 2026-03-09T20:24:02.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:02 vm09 ceph-mon[54524]: from='client.50140 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-94310-38","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:24:02.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:02 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:24:02.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:02 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-23"}]: dispatch 2026-03-09T20:24:02.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:02 vm09 ceph-mon[54524]: osdmap e191: 8 total, 8 up, 8 in 2026-03-09T20:24:02.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:02 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-23"}]: dispatch 2026-03-09T20:24:02.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:02 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2178041329' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm05-94281-34"}]: dispatch 2026-03-09T20:24:02.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:02 vm09 ceph-mon[54524]: from='client.50146 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm05-94281-34"}]: dispatch 2026-03-09T20:24:02.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:02 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2178041329' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm05-94281-34"}]: dispatch 2026-03-09T20:24:02.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:02 vm09 ceph-mon[54524]: from='client.50146 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm05-94281-34"}]: dispatch 2026-03-09T20:24:02.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:02 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/2178041329' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm05-94281-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:02.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:02 vm09 ceph-mon[54524]: from='client.50146 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm05-94281-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:03.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:03 vm05 ceph-mon[61345]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:24:03.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:03 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-23"}]': finished 2026-03-09T20:24:03.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:03 vm05 ceph-mon[61345]: from='client.50146 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm05-94281-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:24:03.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:03 vm05 ceph-mon[61345]: osdmap e192: 8 total, 8 up, 8 in 2026-03-09T20:24:03.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:03 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2178041329' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm05-94281-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm05-94281-34"}]: dispatch 2026-03-09T20:24:03.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:03 vm05 ceph-mon[61345]: from='client.50146 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm05-94281-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm05-94281-34"}]: dispatch 2026-03-09T20:24:03.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:03 vm05 ceph-mon[61345]: pgmap v235: 356 pgs: 7 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 345 active+clean; 456 KiB data, 696 MiB used, 159 GiB / 160 GiB avail; 3.0 KiB/s rd, 1023 B/s wr, 6 op/s 2026-03-09T20:24:03.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:03 vm05 ceph-mon[51870]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:24:03.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:03 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-23"}]': finished 2026-03-09T20:24:03.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:03 vm05 ceph-mon[51870]: from='client.50146 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm05-94281-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:24:03.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:03 vm05 ceph-mon[51870]: osdmap e192: 8 total, 8 up, 8 in 2026-03-09T20:24:03.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:03 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/2178041329' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm05-94281-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm05-94281-34"}]: dispatch 2026-03-09T20:24:03.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:03 vm05 ceph-mon[51870]: from='client.50146 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm05-94281-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm05-94281-34"}]: dispatch 2026-03-09T20:24:03.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:03 vm05 ceph-mon[51870]: pgmap v235: 356 pgs: 7 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 345 active+clean; 456 KiB data, 696 MiB used, 159 GiB / 160 GiB avail; 3.0 KiB/s rd, 1023 B/s wr, 6 op/s 2026-03-09T20:24:03.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:03 vm09 ceph-mon[54524]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:24:03.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:03 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-23"}]': finished 2026-03-09T20:24:03.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:03 vm09 ceph-mon[54524]: from='client.50146 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm05-94281-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:24:03.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:03 vm09 ceph-mon[54524]: osdmap e192: 8 total, 8 up, 8 in 2026-03-09T20:24:03.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:03 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/2178041329' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm05-94281-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm05-94281-34"}]: dispatch 2026-03-09T20:24:03.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:03 vm09 ceph-mon[54524]: from='client.50146 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm05-94281-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm05-94281-34"}]: dispatch 2026-03-09T20:24:03.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:03 vm09 ceph-mon[54524]: pgmap v235: 356 pgs: 7 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 345 active+clean; 456 KiB data, 696 MiB used, 159 GiB / 160 GiB avail; 3.0 KiB/s rd, 1023 B/s wr, 6 op/s 2026-03-09T20:24:04.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:04 vm05 ceph-mon[51870]: osdmap e193: 8 total, 8 up, 8 in 2026-03-09T20:24:04.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:04 vm05 ceph-mon[61345]: osdmap e193: 8 total, 8 up, 8 in 2026-03-09T20:24:04.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:04 vm09 ceph-mon[54524]: osdmap e193: 8 total, 8 up, 8 in 2026-03-09T20:24:05.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:05 vm05 ceph-mon[61345]: from='client.50146 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValue_vm05-94281-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm05-94281-34"}]': finished 2026-03-09T20:24:05.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:05 vm05 ceph-mon[61345]: osdmap e194: 8 total, 8 up, 8 in 2026-03-09T20:24:05.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:05 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:05.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:05 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:05.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:05 vm05 ceph-mon[61345]: pgmap v238: 300 pgs: 40 unknown, 3 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 253 active+clean; 456 KiB data, 696 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T20:24:05.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:05 vm05 ceph-mon[51870]: from='client.50146 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValue_vm05-94281-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm05-94281-34"}]': finished 2026-03-09T20:24:05.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:05 vm05 ceph-mon[51870]: osdmap e194: 8 total, 8 up, 8 in 2026-03-09T20:24:05.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:05 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:05.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:05 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:05.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:05 vm05 ceph-mon[51870]: pgmap v238: 300 pgs: 40 unknown, 3 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 253 active+clean; 456 KiB data, 696 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T20:24:05.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:05 vm09 ceph-mon[54524]: from='client.50146 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValue_vm05-94281-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm05-94281-34"}]': finished 2026-03-09T20:24:05.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:05 vm09 ceph-mon[54524]: osdmap e194: 8 total, 8 up, 8 in 2026-03-09T20:24:05.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:05 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:05.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:05 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:05.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:05 vm09 ceph-mon[54524]: pgmap v238: 300 pgs: 40 unknown, 3 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 253 active+clean; 456 KiB data, 696 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T20:24:05.522 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:24:05 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:24:06.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:06 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:24:06.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:06 vm05 ceph-mon[61345]: osdmap e195: 8 total, 8 up, 8 in 2026-03-09T20:24:06.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:06 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3678006927' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-94310-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:06.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:06 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:24:06.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:06 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3678006927' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-94310-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:24:06.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:06 vm05 ceph-mon[61345]: osdmap e196: 8 total, 8 up, 8 in 2026-03-09T20:24:06.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:06 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2178041329' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm05-94281-34"}]: dispatch 2026-03-09T20:24:06.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:06 vm05 ceph-mon[61345]: from='client.50146 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm05-94281-34"}]: dispatch 2026-03-09T20:24:06.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:06 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-25", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:24:06.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:06 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-25", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:24:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:06 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:24:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:06 vm05 ceph-mon[51870]: osdmap e195: 8 total, 8 up, 8 in 2026-03-09T20:24:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:06 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3678006927' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-94310-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:06 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:24:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:06 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3678006927' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-94310-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:24:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:06 vm05 ceph-mon[51870]: osdmap e196: 8 total, 8 up, 8 in 2026-03-09T20:24:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:06 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/2178041329' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm05-94281-34"}]: dispatch 2026-03-09T20:24:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:06 vm05 ceph-mon[51870]: from='client.50146 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm05-94281-34"}]: dispatch 2026-03-09T20:24:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:06 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-25", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:24:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:06 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-25", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:24:06.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:06 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:24:06.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:06 vm09 ceph-mon[54524]: osdmap e195: 8 total, 8 up, 8 in 2026-03-09T20:24:06.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:06 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3678006927' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-94310-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:06.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:06 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:24:06.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:06 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3678006927' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-94310-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:24:06.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:06 vm09 ceph-mon[54524]: osdmap e196: 8 total, 8 up, 8 in 2026-03-09T20:24:06.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:06 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2178041329' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm05-94281-34"}]: dispatch 2026-03-09T20:24:06.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:06 vm09 ceph-mon[54524]: from='client.50146 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm05-94281-34"}]: dispatch 2026-03-09T20:24:06.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:06 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-25", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:24:06.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:06 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-25", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:24:07.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:07 vm05 ceph-mon[61345]: pgmap v241: 324 pgs: 32 creating+peering, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 286 active+clean; 456 KiB data, 697 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:24:07.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:07 vm05 ceph-mon[61345]: from='client.50146 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm05-94281-34"}]': finished 2026-03-09T20:24:07.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:07 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-25", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:24:07.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:07 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-25"}]: dispatch 2026-03-09T20:24:07.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:07 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2178041329' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm05-94281-34"}]: dispatch 2026-03-09T20:24:07.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:07 vm05 ceph-mon[61345]: osdmap e197: 8 total, 8 up, 8 in 2026-03-09T20:24:07.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:07 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-25"}]: dispatch 2026-03-09T20:24:07.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:07 vm05 ceph-mon[61345]: from='client.50146 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm05-94281-34"}]: dispatch 2026-03-09T20:24:07.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:07 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1570336420' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-94310-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:07.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:07 vm05 ceph-mon[51870]: pgmap v241: 324 pgs: 32 creating+peering, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 286 active+clean; 456 KiB data, 697 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:24:07.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:07 vm05 ceph-mon[51870]: from='client.50146 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm05-94281-34"}]': finished 2026-03-09T20:24:07.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:07 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-25", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:24:07.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:07 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-25"}]: dispatch 2026-03-09T20:24:07.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:07 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2178041329' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm05-94281-34"}]: dispatch 2026-03-09T20:24:07.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:07 vm05 ceph-mon[51870]: osdmap e197: 8 total, 8 up, 8 in 2026-03-09T20:24:07.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:07 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-25"}]: dispatch 2026-03-09T20:24:07.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:07 vm05 ceph-mon[51870]: from='client.50146 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm05-94281-34"}]: dispatch 2026-03-09T20:24:07.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:07 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1570336420' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-94310-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:07.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:07 vm09 ceph-mon[54524]: pgmap v241: 324 pgs: 32 creating+peering, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 286 active+clean; 456 KiB data, 697 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:24:07.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:07 vm09 ceph-mon[54524]: from='client.50146 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm05-94281-34"}]': finished 2026-03-09T20:24:07.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:07 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-25", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:24:07.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:07 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-25"}]: dispatch 2026-03-09T20:24:07.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:07 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2178041329' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm05-94281-34"}]: dispatch 2026-03-09T20:24:07.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:07 vm09 ceph-mon[54524]: osdmap e197: 8 total, 8 up, 8 in 2026-03-09T20:24:07.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:07 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-25"}]: dispatch 2026-03-09T20:24:07.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:07 vm09 ceph-mon[54524]: from='client.50146 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm05-94281-34"}]: dispatch 2026-03-09T20:24:07.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:07 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1570336420' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-94310-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:08.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:08 vm05 ceph-mon[51870]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:24:08.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:08 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-25"}]': finished 2026-03-09T20:24:08.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:08 vm05 ceph-mon[51870]: from='client.50146 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm05-94281-34"}]': finished 2026-03-09T20:24:08.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:08 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1570336420' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-94310-40","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:24:08.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:08 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-25", "mode": "writeback"}]: dispatch 2026-03-09T20:24:08.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:08 vm05 ceph-mon[51870]: osdmap e198: 8 total, 8 up, 8 in 2026-03-09T20:24:08.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:08 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-25", "mode": "writeback"}]: dispatch 2026-03-09T20:24:08.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:08 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3755678663' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm05-94281-35"}]: dispatch 2026-03-09T20:24:08.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:08 vm05 ceph-mon[51870]: from='client.50161 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm05-94281-35"}]: dispatch 2026-03-09T20:24:08.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:08 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3755678663' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm05-94281-35"}]: dispatch 2026-03-09T20:24:08.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:08 vm05 ceph-mon[51870]: from='client.50161 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm05-94281-35"}]: dispatch 2026-03-09T20:24:08.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:08 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3755678663' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm05-94281-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:08.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:08 vm05 ceph-mon[51870]: from='client.50161 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm05-94281-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:08.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:08 vm05 ceph-mon[61345]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:24:08.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:08 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-25"}]': finished 2026-03-09T20:24:08.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:08 vm05 ceph-mon[61345]: from='client.50146 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm05-94281-34"}]': finished 2026-03-09T20:24:08.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:08 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1570336420' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-94310-40","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:24:08.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:08 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-25", "mode": "writeback"}]: dispatch 2026-03-09T20:24:08.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:08 vm05 ceph-mon[61345]: osdmap e198: 8 total, 8 up, 8 in 2026-03-09T20:24:08.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:08 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-25", "mode": "writeback"}]: dispatch 2026-03-09T20:24:08.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:08 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3755678663' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm05-94281-35"}]: dispatch 2026-03-09T20:24:08.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:08 vm05 ceph-mon[61345]: from='client.50161 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm05-94281-35"}]: dispatch 2026-03-09T20:24:08.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:08 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3755678663' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm05-94281-35"}]: dispatch 2026-03-09T20:24:08.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:08 vm05 ceph-mon[61345]: from='client.50161 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm05-94281-35"}]: dispatch 2026-03-09T20:24:08.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:08 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3755678663' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm05-94281-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:08.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:08 vm05 ceph-mon[61345]: from='client.50161 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm05-94281-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:08.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:08 vm09 ceph-mon[54524]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:24:08.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:08 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-25"}]': finished 2026-03-09T20:24:08.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:08 vm09 ceph-mon[54524]: from='client.50146 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm05-94281-34"}]': finished 2026-03-09T20:24:08.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:08 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1570336420' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-94310-40","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:24:08.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:08 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-25", "mode": "writeback"}]: dispatch 2026-03-09T20:24:08.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:08 vm09 ceph-mon[54524]: osdmap e198: 8 total, 8 up, 8 in 2026-03-09T20:24:08.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:08 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-25", "mode": "writeback"}]: dispatch 2026-03-09T20:24:08.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:08 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3755678663' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm05-94281-35"}]: dispatch 2026-03-09T20:24:08.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:08 vm09 ceph-mon[54524]: from='client.50161 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm05-94281-35"}]: dispatch 2026-03-09T20:24:08.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:08 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3755678663' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm05-94281-35"}]: dispatch 2026-03-09T20:24:08.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:08 vm09 ceph-mon[54524]: from='client.50161 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm05-94281-35"}]: dispatch 2026-03-09T20:24:08.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:08 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3755678663' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm05-94281-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:08.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:08 vm09 ceph-mon[54524]: from='client.50161 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm05-94281-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:08.909 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:24:08 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:24:08] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:24:09.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:09 vm05 ceph-mon[61345]: pgmap v244: 356 pgs: 32 unknown, 32 creating+peering, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 286 active+clean; 456 KiB data, 697 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:24:09.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:09 vm05 ceph-mon[61345]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:24:09.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:09 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-25", "mode": "writeback"}]': finished 2026-03-09T20:24:09.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:09 vm05 ceph-mon[61345]: from='client.50161 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm05-94281-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:24:09.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:09 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3755678663' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "Flush_vm05-94281-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm05-94281-35"}]: dispatch 2026-03-09T20:24:09.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:09 vm05 ceph-mon[61345]: osdmap e199: 8 total, 8 up, 8 in 2026-03-09T20:24:09.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:09 vm05 ceph-mon[61345]: from='client.50161 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "Flush_vm05-94281-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm05-94281-35"}]: dispatch 2026-03-09T20:24:09.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:09 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2891904502' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-94310-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:09.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:09 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:24:09.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:09 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:24:09.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:09 vm05 ceph-mon[51870]: pgmap v244: 356 pgs: 32 unknown, 32 creating+peering, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 286 active+clean; 456 KiB data, 697 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:24:09.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:09 vm05 ceph-mon[51870]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:24:09.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:09 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-25", "mode": "writeback"}]': finished 2026-03-09T20:24:09.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:09 vm05 ceph-mon[51870]: from='client.50161 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm05-94281-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:24:09.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:09 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3755678663' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "Flush_vm05-94281-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm05-94281-35"}]: dispatch 2026-03-09T20:24:09.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:09 vm05 ceph-mon[51870]: osdmap e199: 8 total, 8 up, 8 in 2026-03-09T20:24:09.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:09 vm05 ceph-mon[51870]: from='client.50161 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "Flush_vm05-94281-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm05-94281-35"}]: dispatch 2026-03-09T20:24:09.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:09 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/2891904502' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-94310-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:09.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:09 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:24:09.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:09 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:24:09.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:09 vm09 ceph-mon[54524]: pgmap v244: 356 pgs: 32 unknown, 32 creating+peering, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 286 active+clean; 456 KiB data, 697 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:24:09.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:09 vm09 ceph-mon[54524]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:24:09.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:09 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-25", "mode": "writeback"}]': finished 2026-03-09T20:24:09.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:09 vm09 ceph-mon[54524]: from='client.50161 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm05-94281-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:24:09.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:09 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3755678663' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "Flush_vm05-94281-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm05-94281-35"}]: dispatch 2026-03-09T20:24:09.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:09 vm09 ceph-mon[54524]: osdmap e199: 8 total, 8 up, 8 in 2026-03-09T20:24:09.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:09 vm09 ceph-mon[54524]: from='client.50161 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "Flush_vm05-94281-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm05-94281-35"}]: dispatch 2026-03-09T20:24:09.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:09 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2891904502' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-94310-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:09.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:09 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:24:09.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:09 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:24:11.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:11 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/2891904502' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-94310-41","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:24:11.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:11 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:24:11.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:11 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-25"}]: dispatch 2026-03-09T20:24:11.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:11 vm05 ceph-mon[61345]: osdmap e200: 8 total, 8 up, 8 in 2026-03-09T20:24:11.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:11 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-25"}]: dispatch 2026-03-09T20:24:11.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:11 vm05 ceph-mon[61345]: pgmap v247: 388 pgs: 64 unknown, 32 creating+peering, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 286 active+clean; 456 KiB data, 697 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:24:11.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:11 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2891904502' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-94310-41","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:24:11.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:11 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:24:11.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:11 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-25"}]: dispatch 2026-03-09T20:24:11.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:11 vm05 ceph-mon[51870]: osdmap e200: 8 total, 8 up, 8 in 2026-03-09T20:24:11.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:11 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-25"}]: dispatch 2026-03-09T20:24:11.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:11 vm05 ceph-mon[51870]: pgmap v247: 388 pgs: 64 unknown, 32 creating+peering, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 286 active+clean; 456 KiB data, 697 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:24:11.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:11 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/2891904502' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-94310-41","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:24:11.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:11 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:24:11.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:11 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-25"}]: dispatch 2026-03-09T20:24:11.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:11 vm09 ceph-mon[54524]: osdmap e200: 8 total, 8 up, 8 in 2026-03-09T20:24:11.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:11 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-25"}]: dispatch 2026-03-09T20:24:11.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:11 vm09 ceph-mon[54524]: pgmap v247: 388 pgs: 64 unknown, 32 creating+peering, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 286 active+clean; 456 KiB data, 697 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:24:12.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:12 vm05 ceph-mon[61345]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:24:12.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:12 vm05 ceph-mon[61345]: from='client.50161 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "Flush_vm05-94281-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm05-94281-35"}]': finished 2026-03-09T20:24:12.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:12 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-25"}]': finished 2026-03-09T20:24:12.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:12 vm05 ceph-mon[61345]: osdmap e201: 8 total, 8 up, 8 in 2026-03-09T20:24:12.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:12 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/351644060' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-94310-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:12.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:12 vm05 ceph-mon[61345]: from='client.50173 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-94310-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:12.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:12 vm05 ceph-mon[51870]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:24:12.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:12 vm05 ceph-mon[51870]: from='client.50161 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "Flush_vm05-94281-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm05-94281-35"}]': finished 2026-03-09T20:24:12.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:12 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-25"}]': finished 2026-03-09T20:24:12.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:12 vm05 ceph-mon[51870]: osdmap e201: 8 total, 8 up, 8 in 2026-03-09T20:24:12.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:12 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/351644060' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-94310-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:12.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:12 vm05 ceph-mon[51870]: from='client.50173 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-94310-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:12.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:12 vm09 ceph-mon[54524]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:24:12.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:12 vm09 ceph-mon[54524]: from='client.50161 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "Flush_vm05-94281-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm05-94281-35"}]': finished 2026-03-09T20:24:12.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:12 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-25"}]': finished 2026-03-09T20:24:12.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:12 vm09 ceph-mon[54524]: osdmap e201: 8 total, 8 up, 8 in 2026-03-09T20:24:12.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:12 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/351644060' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-94310-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:12.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:12 vm09 ceph-mon[54524]: from='client.50173 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-94310-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:13.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:13 vm05 ceph-mon[61345]: from='client.50173 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-94310-42","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:24:13.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:13 vm05 ceph-mon[61345]: osdmap e202: 8 total, 8 up, 8 in 2026-03-09T20:24:13.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:13 vm05 ceph-mon[61345]: pgmap v250: 396 pgs: 40 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 350 active+clean; 456 KiB data, 706 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T20:24:13.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:13 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3755678663' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm05-94281-35"}]: dispatch 2026-03-09T20:24:13.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:13 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:13.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:13 vm05 ceph-mon[61345]: osdmap e203: 8 total, 8 up, 8 in 2026-03-09T20:24:13.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:13 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/343404658' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-94310-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:13.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:13 vm05 ceph-mon[61345]: from='client.50161 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm05-94281-35"}]: dispatch 2026-03-09T20:24:13.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:13 vm05 ceph-mon[61345]: from='client.49469 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-94310-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:13.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:13 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:13.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:13 vm05 ceph-mon[51870]: from='client.50173 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-94310-42","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:24:13.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:13 vm05 ceph-mon[51870]: osdmap e202: 8 total, 8 up, 8 in 2026-03-09T20:24:13.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:13 vm05 ceph-mon[51870]: pgmap v250: 396 pgs: 40 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 350 active+clean; 456 KiB data, 706 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T20:24:13.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:13 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3755678663' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm05-94281-35"}]: dispatch 2026-03-09T20:24:13.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:13 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:13.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:13 vm05 ceph-mon[51870]: osdmap e203: 8 total, 8 up, 8 in 2026-03-09T20:24:13.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:13 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/343404658' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-94310-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:13.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:13 vm05 ceph-mon[51870]: from='client.50161 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm05-94281-35"}]: dispatch 2026-03-09T20:24:13.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:13 vm05 ceph-mon[51870]: from='client.49469 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-94310-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:13.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:13 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:13.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:13 vm09 ceph-mon[54524]: from='client.50173 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-94310-42","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:24:13.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:13 vm09 ceph-mon[54524]: osdmap e202: 8 total, 8 up, 8 in 2026-03-09T20:24:13.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:13 vm09 ceph-mon[54524]: pgmap v250: 396 pgs: 40 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 350 active+clean; 456 KiB data, 706 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T20:24:13.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:13 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3755678663' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm05-94281-35"}]: dispatch 2026-03-09T20:24:13.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:13 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:13.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:13 vm09 ceph-mon[54524]: osdmap e203: 8 total, 8 up, 8 in 2026-03-09T20:24:13.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:13 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/343404658' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-94310-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:13.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:13 vm09 ceph-mon[54524]: from='client.50161 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm05-94281-35"}]: dispatch 2026-03-09T20:24:13.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:13 vm09 ceph-mon[54524]: from='client.49469 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-94310-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:13.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:13 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:14.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:14 vm05 ceph-mon[61345]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:24:14.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:14 vm05 ceph-mon[61345]: from='client.50161 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm05-94281-35"}]': finished 2026-03-09T20:24:14.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:14 vm05 ceph-mon[61345]: from='client.49469 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-94310-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:24:14.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:14 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:24:14.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:14 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3755678663' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm05-94281-35"}]: dispatch 2026-03-09T20:24:14.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:14 vm05 ceph-mon[61345]: osdmap e204: 8 total, 8 up, 8 in 2026-03-09T20:24:14.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:14 vm05 ceph-mon[61345]: from='client.50161 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm05-94281-35"}]: dispatch 2026-03-09T20:24:14.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:14 vm05 ceph-mon[51870]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:24:14.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:14 vm05 ceph-mon[51870]: from='client.50161 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm05-94281-35"}]': finished 2026-03-09T20:24:14.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:14 vm05 ceph-mon[51870]: from='client.49469 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-94310-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:24:14.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:14 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:24:14.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:14 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3755678663' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm05-94281-35"}]: dispatch 2026-03-09T20:24:14.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:14 vm05 ceph-mon[51870]: osdmap e204: 8 total, 8 up, 8 in 2026-03-09T20:24:14.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:14 vm05 ceph-mon[51870]: from='client.50161 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm05-94281-35"}]: dispatch 2026-03-09T20:24:14.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:14 vm09 ceph-mon[54524]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:24:14.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:14 vm09 ceph-mon[54524]: from='client.50161 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm05-94281-35"}]': finished 2026-03-09T20:24:14.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:14 vm09 ceph-mon[54524]: from='client.49469 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-94310-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:24:14.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:14 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:24:14.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:14 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3755678663' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm05-94281-35"}]: dispatch 2026-03-09T20:24:14.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:14 vm09 ceph-mon[54524]: osdmap e204: 8 total, 8 up, 8 in 2026-03-09T20:24:14.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:14 vm09 ceph-mon[54524]: from='client.50161 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm05-94281-35"}]: dispatch 2026-03-09T20:24:15.613 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:24:15 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:24:15.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:15 vm05 ceph-mon[61345]: pgmap v253: 452 pgs: 96 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 350 active+clean; 456 KiB data, 706 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T20:24:15.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:15 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:24:15.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:15 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:24:15.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:15 vm05 ceph-mon[61345]: from='client.50161 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"Flush_vm05-94281-35"}]': finished 2026-03-09T20:24:15.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:15 vm05 ceph-mon[61345]: osdmap e205: 8 total, 8 up, 8 in 2026-03-09T20:24:15.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:15 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/610773330' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm05-94281-36"}]: dispatch 2026-03-09T20:24:15.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:15 vm05 ceph-mon[61345]: from='client.50185 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm05-94281-36"}]: dispatch 2026-03-09T20:24:15.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:15 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/610773330' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm05-94281-36"}]: dispatch 2026-03-09T20:24:15.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:15 vm05 ceph-mon[61345]: from='client.50185 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm05-94281-36"}]: dispatch 2026-03-09T20:24:15.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:15 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/610773330' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm05-94281-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:15.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:15 vm05 ceph-mon[61345]: from='client.50185 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm05-94281-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:15.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:15 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-27", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:24:15.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:15 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-27", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:24:15.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:15 vm05 ceph-mon[51870]: pgmap v253: 452 pgs: 96 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 350 active+clean; 456 KiB data, 706 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T20:24:15.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:15 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:24:15.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:15 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:24:15.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:15 vm05 ceph-mon[51870]: from='client.50161 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"Flush_vm05-94281-35"}]': finished 2026-03-09T20:24:15.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:15 vm05 ceph-mon[51870]: osdmap e205: 8 total, 8 up, 8 in 2026-03-09T20:24:15.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:15 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/610773330' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm05-94281-36"}]: dispatch 2026-03-09T20:24:15.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:15 vm05 ceph-mon[51870]: from='client.50185 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm05-94281-36"}]: dispatch 2026-03-09T20:24:15.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:15 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/610773330' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm05-94281-36"}]: dispatch 2026-03-09T20:24:15.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:15 vm05 ceph-mon[51870]: from='client.50185 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm05-94281-36"}]: dispatch 2026-03-09T20:24:15.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:15 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/610773330' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm05-94281-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:15.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:15 vm05 ceph-mon[51870]: from='client.50185 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm05-94281-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:15.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:15 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-27", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:24:15.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:15 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-27", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:24:16.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:15 vm09 ceph-mon[54524]: pgmap v253: 452 pgs: 96 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 350 active+clean; 456 KiB data, 706 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T20:24:16.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:15 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:24:16.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:15 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:24:16.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:15 vm09 ceph-mon[54524]: from='client.50161 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"Flush_vm05-94281-35"}]': finished 2026-03-09T20:24:16.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:15 vm09 ceph-mon[54524]: osdmap e205: 8 total, 8 up, 8 in 2026-03-09T20:24:16.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:15 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/610773330' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm05-94281-36"}]: dispatch 2026-03-09T20:24:16.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:15 vm09 ceph-mon[54524]: from='client.50185 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm05-94281-36"}]: dispatch 2026-03-09T20:24:16.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:15 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/610773330' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm05-94281-36"}]: dispatch 2026-03-09T20:24:16.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:15 vm09 ceph-mon[54524]: from='client.50185 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm05-94281-36"}]: dispatch 2026-03-09T20:24:16.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:15 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/610773330' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm05-94281-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:16.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:15 vm09 ceph-mon[54524]: from='client.50185 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm05-94281-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:16.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:15 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-27", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:24:16.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:15 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-27", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:24:16.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:16 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:24:16.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:16 vm05 ceph-mon[61345]: from='client.50185 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm05-94281-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:24:16.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:16 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-27", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:24:16.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:16 vm05 ceph-mon[61345]: osdmap e206: 8 total, 8 up, 8 in 2026-03-09T20:24:16.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:16 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-27"}]: dispatch 2026-03-09T20:24:16.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:16 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/610773330' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsync_vm05-94281-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm05-94281-36"}]: dispatch 2026-03-09T20:24:16.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:16 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-27"}]: dispatch 2026-03-09T20:24:16.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:16 vm05 ceph-mon[61345]: from='client.50185 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsync_vm05-94281-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm05-94281-36"}]: dispatch 2026-03-09T20:24:16.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:16 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:24:16.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:16 vm05 ceph-mon[51870]: from='client.50185 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm05-94281-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:24:16.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:16 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-27", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:24:16.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:16 vm05 ceph-mon[51870]: osdmap e206: 8 total, 8 up, 8 in 2026-03-09T20:24:16.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:16 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-27"}]: dispatch 2026-03-09T20:24:16.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:16 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/610773330' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsync_vm05-94281-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm05-94281-36"}]: dispatch 2026-03-09T20:24:16.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:16 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-27"}]: dispatch 2026-03-09T20:24:16.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:16 vm05 ceph-mon[51870]: from='client.50185 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsync_vm05-94281-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm05-94281-36"}]: dispatch 2026-03-09T20:24:17.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:16 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:24:17.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:16 vm09 ceph-mon[54524]: from='client.50185 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm05-94281-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:24:17.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:16 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-27", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:24:17.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:16 vm09 ceph-mon[54524]: osdmap e206: 8 total, 8 up, 8 in 2026-03-09T20:24:17.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:16 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-27"}]: dispatch 2026-03-09T20:24:17.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:16 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/610773330' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsync_vm05-94281-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm05-94281-36"}]: dispatch 2026-03-09T20:24:17.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:16 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-27"}]: dispatch 2026-03-09T20:24:17.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:16 vm09 ceph-mon[54524]: from='client.50185 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsync_vm05-94281-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm05-94281-36"}]: dispatch 2026-03-09T20:24:17.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:17 vm05 ceph-mon[61345]: pgmap v256: 388 pgs: 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 382 active+clean; 456 KiB data, 706 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T20:24:17.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:17 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-27"}]': finished 2026-03-09T20:24:17.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:17 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-27", "mode": "writeback"}]: dispatch 2026-03-09T20:24:17.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:17 vm05 ceph-mon[61345]: osdmap e207: 8 total, 8 up, 8 in 2026-03-09T20:24:17.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:17 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-27", "mode": "writeback"}]: dispatch 2026-03-09T20:24:17.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:17 vm05 ceph-mon[51870]: pgmap v256: 388 pgs: 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 382 active+clean; 456 KiB data, 706 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T20:24:17.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:17 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-27"}]': finished 2026-03-09T20:24:17.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:17 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-27", "mode": "writeback"}]: dispatch 2026-03-09T20:24:17.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:17 vm05 ceph-mon[51870]: osdmap e207: 8 total, 8 up, 8 in 2026-03-09T20:24:17.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:17 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-27", "mode": "writeback"}]: dispatch 2026-03-09T20:24:18.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:17 vm09 ceph-mon[54524]: pgmap v256: 388 pgs: 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 382 active+clean; 456 KiB data, 706 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T20:24:18.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:17 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-27"}]': finished 2026-03-09T20:24:18.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:17 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-27", "mode": "writeback"}]: dispatch 2026-03-09T20:24:18.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:17 vm09 ceph-mon[54524]: osdmap e207: 8 total, 8 up, 8 in 2026-03-09T20:24:18.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:17 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-27", "mode": "writeback"}]: dispatch 2026-03-09T20:24:18.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:18 vm05 ceph-mon[61345]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:24:18.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:18 vm05 ceph-mon[61345]: from='client.50185 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsync_vm05-94281-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm05-94281-36"}]': finished 2026-03-09T20:24:18.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:18 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-27", "mode": "writeback"}]': finished 2026-03-09T20:24:18.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:18 vm05 ceph-mon[61345]: osdmap e208: 8 total, 8 up, 8 in 2026-03-09T20:24:18.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:18 vm05 ceph-mon[51870]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:24:18.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:18 vm05 ceph-mon[51870]: from='client.50185 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsync_vm05-94281-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm05-94281-36"}]': finished 2026-03-09T20:24:18.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:18 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-27", "mode": "writeback"}]': finished 2026-03-09T20:24:18.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 
20:24:18 vm05 ceph-mon[51870]: osdmap e208: 8 total, 8 up, 8 in 2026-03-09T20:24:18.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:24:18 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:24:18] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:24:19.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:18 vm09 ceph-mon[54524]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:24:19.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:18 vm09 ceph-mon[54524]: from='client.50185 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsync_vm05-94281-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm05-94281-36"}]': finished 2026-03-09T20:24:19.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:18 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-27", "mode": "writeback"}]': finished 2026-03-09T20:24:19.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:18 vm09 ceph-mon[54524]: osdmap e208: 8 total, 8 up, 8 in 2026-03-09T20:24:19.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:19 vm05 ceph-mon[61345]: pgmap v259: 332 pgs: 8 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 318 active+clean; 456 KiB data, 706 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T20:24:19.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:19 vm05 ceph-mon[61345]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:24:19.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:19 vm05 ceph-mon[61345]: osdmap e209: 8 total, 8 up, 8 in 2026-03-09T20:24:19.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:19 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:24:19.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:19 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:24:19.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:19 vm05 ceph-mon[51870]: pgmap v259: 332 pgs: 8 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 318 active+clean; 456 KiB data, 706 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T20:24:19.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:19 vm05 ceph-mon[51870]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:24:19.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:19 vm05 ceph-mon[51870]: osdmap e209: 8 total, 8 up, 8 in 2026-03-09T20:24:19.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:19 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:24:19.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:19 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:24:20.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:19 vm09 ceph-mon[54524]: pgmap v259: 332 pgs: 8 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 318 active+clean; 456 KiB data, 706 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T20:24:20.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:19 vm09 ceph-mon[54524]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:24:20.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:19 vm09 ceph-mon[54524]: osdmap e209: 8 total, 8 up, 8 in 2026-03-09T20:24:20.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:19 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:24:20.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:19 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:24:21.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:21 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:24:21.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:21 vm09 ceph-mon[54524]: osdmap e210: 8 total, 8 up, 8 in 2026-03-09T20:24:21.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:21 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-27"}]: dispatch 2026-03-09T20:24:21.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:21 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/610773330' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm05-94281-36"}]: dispatch 2026-03-09T20:24:21.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:21 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/291595497' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm05-94310-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:21.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:21 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-27"}]: dispatch 2026-03-09T20:24:21.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:21 vm09 ceph-mon[54524]: from='client.50185 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm05-94281-36"}]: dispatch 2026-03-09T20:24:21.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:21 vm09 ceph-mon[54524]: from='client.50188 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm05-94310-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:21.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:21 vm09 ceph-mon[54524]: pgmap v262: 324 pgs: 32 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 286 active+clean; 456 KiB data, 706 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:24:21.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:21 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:24:21.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:21 vm05 ceph-mon[61345]: osdmap e210: 8 total, 8 up, 8 in 2026-03-09T20:24:21.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:21 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-27"}]: dispatch 2026-03-09T20:24:21.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:21 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/610773330' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm05-94281-36"}]: dispatch 2026-03-09T20:24:21.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:21 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/291595497' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm05-94310-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:21.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:21 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-27"}]: dispatch 2026-03-09T20:24:21.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:21 vm05 ceph-mon[61345]: from='client.50185 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm05-94281-36"}]: dispatch 2026-03-09T20:24:21.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:21 vm05 ceph-mon[61345]: from='client.50188 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm05-94310-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:21.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:21 vm05 ceph-mon[61345]: pgmap v262: 324 pgs: 32 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 286 active+clean; 456 KiB data, 706 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:24:21.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:21 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:24:21.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:21 vm05 ceph-mon[51870]: osdmap e210: 8 total, 8 up, 8 in 2026-03-09T20:24:21.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:21 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-27"}]: dispatch 2026-03-09T20:24:21.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:21 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/610773330' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm05-94281-36"}]: dispatch 2026-03-09T20:24:21.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:21 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/291595497' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm05-94310-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:21.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:21 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-27"}]: dispatch 2026-03-09T20:24:21.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:21 vm05 ceph-mon[51870]: from='client.50185 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm05-94281-36"}]: dispatch 2026-03-09T20:24:21.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:21 vm05 ceph-mon[51870]: from='client.50188 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm05-94310-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:21.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:21 vm05 ceph-mon[51870]: pgmap v262: 324 pgs: 32 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 286 active+clean; 456 KiB data, 706 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:24:22.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:22 vm09 ceph-mon[54524]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:24:22.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:22 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-27"}]': finished 2026-03-09T20:24:22.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:22 vm09 ceph-mon[54524]: from='client.50185 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm05-94281-36"}]': finished 2026-03-09T20:24:22.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:22 vm09 ceph-mon[54524]: from='client.50188 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm05-94310-44","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:24:22.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:22 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/610773330' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm05-94281-36"}]: dispatch 2026-03-09T20:24:22.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:22 vm09 ceph-mon[54524]: osdmap e211: 8 total, 8 up, 8 in 2026-03-09T20:24:22.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:22 vm09 ceph-mon[54524]: from='client.50185 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm05-94281-36"}]: dispatch 2026-03-09T20:24:22.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:22 vm05 ceph-mon[61345]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:24:22.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:22 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-27"}]': finished 2026-03-09T20:24:22.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:22 vm05 ceph-mon[61345]: from='client.50185 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm05-94281-36"}]': finished 2026-03-09T20:24:22.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:22 vm05 ceph-mon[61345]: from='client.50188 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm05-94310-44","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:24:22.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:22 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/610773330' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm05-94281-36"}]: dispatch 2026-03-09T20:24:22.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:22 vm05 ceph-mon[61345]: osdmap e211: 8 total, 8 up, 8 in 2026-03-09T20:24:22.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:22 vm05 ceph-mon[61345]: from='client.50185 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm05-94281-36"}]: dispatch 2026-03-09T20:24:22.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:22 vm05 ceph-mon[51870]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:24:22.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:22 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-27"}]': finished 2026-03-09T20:24:22.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:22 vm05 ceph-mon[51870]: from='client.50185 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm05-94281-36"}]': finished 2026-03-09T20:24:22.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:22 vm05 ceph-mon[51870]: from='client.50188 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm05-94310-44","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:24:22.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:22 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/610773330' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm05-94281-36"}]: dispatch 2026-03-09T20:24:22.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:22 vm05 ceph-mon[51870]: osdmap e211: 8 total, 8 up, 8 in 2026-03-09T20:24:22.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:22 vm05 ceph-mon[51870]: from='client.50185 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm05-94281-36"}]: dispatch 2026-03-09T20:24:23.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:23 vm09 ceph-mon[54524]: pgmap v264: 324 pgs: 32 creating+peering, 8 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 281 active+clean; 456 KiB data, 707 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 2 op/s 2026-03-09T20:24:23.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:23 vm09 ceph-mon[54524]: from='client.50185 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm05-94281-36"}]': finished 2026-03-09T20:24:23.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:23 vm09 ceph-mon[54524]: osdmap e212: 8 total, 8 up, 8 in 2026-03-09T20:24:23.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:23 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3660549493' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm05-94281-37"}]: dispatch 2026-03-09T20:24:23.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:23 vm09 ceph-mon[54524]: from='client.50197 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm05-94281-37"}]: dispatch 2026-03-09T20:24:23.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:23 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3660549493' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm05-94281-37"}]: dispatch 2026-03-09T20:24:23.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:23 vm09 ceph-mon[54524]: from='client.50197 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm05-94281-37"}]: dispatch 2026-03-09T20:24:23.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:23 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3660549493' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm05-94281-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:23.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:23 vm09 ceph-mon[54524]: from='client.50197 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm05-94281-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:23.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:23 vm09 ceph-mon[54524]: from='client.50197 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm05-94281-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:24:23.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:23 vm09 ceph-mon[54524]: osdmap e213: 8 total, 8 up, 8 in 2026-03-09T20:24:23.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:23 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3660549493' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm05-94281-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm05-94281-37"}]: dispatch 2026-03-09T20:24:23.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:23 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:23.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:23 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/958421944' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm05-94310-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:23.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:23 vm09 ceph-mon[54524]: from='client.50197 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm05-94281-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm05-94281-37"}]: dispatch 2026-03-09T20:24:23.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:23 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:23.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:23 vm09 ceph-mon[54524]: from='client.50191 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm05-94310-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:23.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:23 vm05 ceph-mon[61345]: pgmap v264: 324 pgs: 32 creating+peering, 8 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 281 active+clean; 456 KiB data, 707 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 2 op/s 2026-03-09T20:24:23.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:23 vm05 ceph-mon[61345]: from='client.50185 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm05-94281-36"}]': finished 2026-03-09T20:24:23.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:23 vm05 ceph-mon[61345]: osdmap e212: 8 total, 8 up, 8 in 2026-03-09T20:24:23.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:23 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3660549493' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm05-94281-37"}]: dispatch 2026-03-09T20:24:23.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:23 vm05 ceph-mon[61345]: from='client.50197 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm05-94281-37"}]: dispatch 2026-03-09T20:24:23.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:23 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3660549493' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm05-94281-37"}]: dispatch 2026-03-09T20:24:23.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:23 vm05 ceph-mon[61345]: from='client.50197 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm05-94281-37"}]: dispatch 2026-03-09T20:24:23.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:23 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3660549493' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm05-94281-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:23.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:23 vm05 ceph-mon[61345]: from='client.50197 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm05-94281-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:23.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:23 vm05 ceph-mon[61345]: from='client.50197 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm05-94281-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:24:23.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:23 vm05 ceph-mon[61345]: osdmap e213: 8 total, 8 up, 8 in 2026-03-09T20:24:23.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:23 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3660549493' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm05-94281-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm05-94281-37"}]: dispatch 2026-03-09T20:24:23.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:23 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:23.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:23 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/958421944' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm05-94310-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:23.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:23 vm05 ceph-mon[61345]: from='client.50197 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm05-94281-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm05-94281-37"}]: dispatch 2026-03-09T20:24:23.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:23 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:23.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:23 vm05 ceph-mon[61345]: from='client.50191 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm05-94310-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:23.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:23 vm05 ceph-mon[51870]: pgmap v264: 324 pgs: 32 creating+peering, 8 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 281 active+clean; 456 KiB data, 707 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 2 op/s 2026-03-09T20:24:23.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:23 vm05 ceph-mon[51870]: from='client.50185 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm05-94281-36"}]': finished 2026-03-09T20:24:23.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:23 vm05 ceph-mon[51870]: osdmap e212: 8 total, 8 up, 8 in 2026-03-09T20:24:23.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:23 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3660549493' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm05-94281-37"}]: dispatch 2026-03-09T20:24:23.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:23 vm05 ceph-mon[51870]: from='client.50197 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm05-94281-37"}]: dispatch 2026-03-09T20:24:23.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:23 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3660549493' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm05-94281-37"}]: dispatch 2026-03-09T20:24:23.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:23 vm05 ceph-mon[51870]: from='client.50197 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm05-94281-37"}]: dispatch 2026-03-09T20:24:23.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:23 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3660549493' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm05-94281-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:23.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:23 vm05 ceph-mon[51870]: from='client.50197 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm05-94281-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:23.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:23 vm05 ceph-mon[51870]: from='client.50197 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm05-94281-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:24:23.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:23 vm05 ceph-mon[51870]: osdmap e213: 8 total, 8 up, 8 in 2026-03-09T20:24:23.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:23 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3660549493' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm05-94281-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm05-94281-37"}]: dispatch 2026-03-09T20:24:23.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:23 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:23.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:23 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/958421944' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm05-94310-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:23.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:23 vm05 ceph-mon[51870]: from='client.50197 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm05-94281-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm05-94281-37"}]: dispatch 2026-03-09T20:24:23.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:23 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:23.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:23 vm05 ceph-mon[51870]: from='client.50191 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm05-94310-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:25.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:25 vm09 ceph-mon[54524]: pgmap v267: 324 pgs: 64 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 254 active+clean; 456 KiB data, 707 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:24:25.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:25 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-29","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:24:25.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:25 vm09 ceph-mon[54524]: from='client.50191 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrListPP_vm05-94310-45","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:24:25.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:25 vm09 ceph-mon[54524]: osdmap e214: 8 total, 8 up, 8 in 2026-03-09T20:24:25.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:25 vm09 ceph-mon[54524]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:24:25.522 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:24:25 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:24:25.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:25 vm05 ceph-mon[61345]: pgmap v267: 324 pgs: 64 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 254 active+clean; 456 KiB data, 707 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:24:25.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:25 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-29","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:24:25.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:25 vm05 ceph-mon[61345]: from='client.50191 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrListPP_vm05-94310-45","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:24:25.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:25 vm05 ceph-mon[61345]: osdmap e214: 8 total, 8 up, 8 in 2026-03-09T20:24:25.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:25 
vm05 ceph-mon[61345]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:24:25.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:25 vm05 ceph-mon[51870]: pgmap v267: 324 pgs: 64 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 254 active+clean; 456 KiB data, 707 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:24:25.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:25 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-29","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:24:25.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:25 vm05 ceph-mon[51870]: from='client.50191 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrListPP_vm05-94310-45","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:24:25.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:25 vm05 ceph-mon[51870]: osdmap e214: 8 total, 8 up, 8 in 2026-03-09T20:24:25.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:25 vm05 ceph-mon[51870]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:24:26.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:26 vm09 ceph-mon[54524]: from='client.50197 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm05-94281-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm05-94281-37"}]': finished 2026-03-09T20:24:26.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:26 vm09 ceph-mon[54524]: osdmap e215: 8 total, 8 up, 8 in 2026-03-09T20:24:26.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:26 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/100059633' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-94310-46"}]: dispatch 2026-03-09T20:24:26.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:26 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/100059633' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-94310-46"}]: dispatch 2026-03-09T20:24:26.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:26 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/100059633' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm05-94310-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:26.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:26 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:24:26.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:26 vm05 ceph-mon[61345]: from='client.50197 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm05-94281-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm05-94281-37"}]': finished 2026-03-09T20:24:26.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:26 vm05 ceph-mon[61345]: osdmap e215: 8 total, 8 up, 8 in 2026-03-09T20:24:26.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:26 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/100059633' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-94310-46"}]: dispatch 2026-03-09T20:24:26.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:26 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/100059633' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-94310-46"}]: dispatch 2026-03-09T20:24:26.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:26 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/100059633' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm05-94310-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:26.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:26 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:24:26.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:26 vm05 ceph-mon[51870]: from='client.50197 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm05-94281-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm05-94281-37"}]': finished 2026-03-09T20:24:26.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:26 vm05 ceph-mon[51870]: osdmap e215: 8 total, 8 up, 8 in 2026-03-09T20:24:26.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:26 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/100059633' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-94310-46"}]: dispatch 2026-03-09T20:24:26.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:26 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/100059633' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-94310-46"}]: dispatch 2026-03-09T20:24:26.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:26 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/100059633' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm05-94310-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:26.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:26 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:24:27.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:27 vm05 ceph-mon[61345]: pgmap v270: 300 pgs: 6 creating+peering, 2 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 286 active+clean; 4.4 MiB data, 720 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T20:24:27.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:27 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/100059633' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm05-94310-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:24:27.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:27 vm05 ceph-mon[61345]: osdmap e216: 8 total, 8 up, 8 in 2026-03-09T20:24:27.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:27 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/100059633' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm05-94310-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm05-94310-46"}]: dispatch 2026-03-09T20:24:27.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:27 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-29", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:24:27.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:27 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-29", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:24:27.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:27 vm05 ceph-mon[51870]: pgmap v270: 300 pgs: 6 creating+peering, 2 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 286 active+clean; 4.4 MiB data, 720 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T20:24:27.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:27 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/100059633' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm05-94310-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:24:27.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:27 vm05 ceph-mon[51870]: osdmap e216: 8 total, 8 up, 8 in 2026-03-09T20:24:27.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:27 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/100059633' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm05-94310-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm05-94310-46"}]: dispatch 2026-03-09T20:24:27.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:27 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-29", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:24:27.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:27 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-29", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:24:27.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:27 vm09 ceph-mon[54524]: pgmap v270: 300 pgs: 6 creating+peering, 2 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 286 active+clean; 4.4 MiB data, 720 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T20:24:27.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:27 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/100059633' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm05-94310-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:24:27.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:27 vm09 ceph-mon[54524]: osdmap e216: 8 total, 8 up, 8 in 2026-03-09T20:24:27.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:27 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/100059633' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm05-94310-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm05-94310-46"}]: dispatch 2026-03-09T20:24:27.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:27 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-29", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:24:27.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:27 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-29", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:24:28.633 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:28 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-29", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:24:28.633 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:28 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-29"}]: dispatch 2026-03-09T20:24:28.633 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:28 vm05 ceph-mon[61345]: osdmap e217: 8 total, 8 up, 8 in 2026-03-09T20:24:28.633 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:28 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-29"}]: dispatch 2026-03-09T20:24:28.633 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:28 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3660549493' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm05-94281-37"}]: dispatch 2026-03-09T20:24:28.633 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:28 vm05 ceph-mon[61345]: from='client.50197 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm05-94281-37"}]: dispatch 2026-03-09T20:24:28.633 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:28 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-29", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:24:28.633 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:28 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-29"}]: dispatch 2026-03-09T20:24:28.633 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:28 vm05 ceph-mon[51870]: osdmap e217: 8 total, 8 up, 8 in 2026-03-09T20:24:28.633 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:28 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-29"}]: dispatch 2026-03-09T20:24:28.633 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:28 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3660549493' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm05-94281-37"}]: dispatch 2026-03-09T20:24:28.633 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:28 vm05 ceph-mon[51870]: from='client.50197 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm05-94281-37"}]: dispatch 2026-03-09T20:24:28.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:28 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-29", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:24:28.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:28 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-29"}]: dispatch 2026-03-09T20:24:28.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:28 vm09 ceph-mon[54524]: osdmap e217: 8 total, 8 up, 8 in 2026-03-09T20:24:28.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:28 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-29"}]: dispatch 2026-03-09T20:24:28.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:28 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3660549493' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm05-94281-37"}]: dispatch 2026-03-09T20:24:28.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:28 vm09 ceph-mon[54524]: from='client.50197 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm05-94281-37"}]: dispatch 2026-03-09T20:24:28.909 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:24:28 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:24:28] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:24:29.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:29 vm05 ceph-mon[61345]: pgmap v273: 292 pgs: 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 286 active+clean; 4.4 MiB data, 720 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1024 KiB/s wr, 1 op/s 2026-03-09T20:24:29.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:29 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/100059633' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm05-94310-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm05-94310-46"}]': finished 2026-03-09T20:24:29.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:29 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-29"}]': finished 2026-03-09T20:24:29.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:29 vm05 ceph-mon[61345]: from='client.50197 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm05-94281-37"}]': finished 2026-03-09T20:24:29.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:29 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3660549493' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm05-94281-37"}]: dispatch 2026-03-09T20:24:29.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:29 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-29", "mode": "writeback"}]: dispatch 2026-03-09T20:24:29.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:29 vm05 ceph-mon[61345]: osdmap e218: 8 total, 8 up, 8 in 2026-03-09T20:24:29.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:29 vm05 ceph-mon[61345]: from='client.50197 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm05-94281-37"}]: dispatch 2026-03-09T20:24:29.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:29 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-29", "mode": "writeback"}]: dispatch 2026-03-09T20:24:29.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:29 vm05 ceph-mon[61345]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:24:29.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:29 vm05 ceph-mon[61345]: from='client.50197 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm05-94281-37"}]': finished 2026-03-09T20:24:29.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:29 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-29", "mode": "writeback"}]': finished 2026-03-09T20:24:29.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:29 vm05 ceph-mon[61345]: osdmap e219: 8 total, 8 up, 8 in 2026-03-09T20:24:29.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:29 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2079342607' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm05-94281-38"}]: dispatch 2026-03-09T20:24:29.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:29 vm05 ceph-mon[51870]: pgmap v273: 292 pgs: 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 286 active+clean; 4.4 MiB data, 720 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1024 KiB/s wr, 1 op/s 2026-03-09T20:24:29.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:29 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/100059633' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm05-94310-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm05-94310-46"}]': finished 2026-03-09T20:24:29.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:29 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-29"}]': finished 2026-03-09T20:24:29.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:29 vm05 ceph-mon[51870]: from='client.50197 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm05-94281-37"}]': finished 2026-03-09T20:24:29.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:29 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3660549493' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm05-94281-37"}]: dispatch 2026-03-09T20:24:29.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:29 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-29", "mode": "writeback"}]: dispatch 2026-03-09T20:24:29.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:29 vm05 ceph-mon[51870]: osdmap e218: 8 total, 8 up, 8 in 2026-03-09T20:24:29.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:29 vm05 ceph-mon[51870]: from='client.50197 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm05-94281-37"}]: dispatch 2026-03-09T20:24:29.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:29 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-29", "mode": "writeback"}]: dispatch 2026-03-09T20:24:29.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:29 vm05 ceph-mon[51870]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:24:29.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:29 vm05 ceph-mon[51870]: from='client.50197 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm05-94281-37"}]': finished 2026-03-09T20:24:29.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:29 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-29", "mode": "writeback"}]': finished 2026-03-09T20:24:29.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:29 vm05 ceph-mon[51870]: osdmap e219: 8 total, 8 up, 8 in 2026-03-09T20:24:29.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:29 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2079342607' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm05-94281-38"}]: dispatch 2026-03-09T20:24:29.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:29 vm09 ceph-mon[54524]: pgmap v273: 292 pgs: 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 286 active+clean; 4.4 MiB data, 720 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1024 KiB/s wr, 1 op/s 2026-03-09T20:24:29.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:29 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/100059633' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm05-94310-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm05-94310-46"}]': finished 2026-03-09T20:24:29.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:29 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-29"}]': finished 2026-03-09T20:24:29.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:29 vm09 ceph-mon[54524]: from='client.50197 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm05-94281-37"}]': finished 2026-03-09T20:24:29.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:29 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3660549493' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm05-94281-37"}]: dispatch 2026-03-09T20:24:29.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:29 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-29", "mode": "writeback"}]: dispatch 2026-03-09T20:24:29.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:29 vm09 ceph-mon[54524]: osdmap e218: 8 total, 8 up, 8 in 2026-03-09T20:24:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:29 vm09 ceph-mon[54524]: from='client.50197 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm05-94281-37"}]: dispatch 2026-03-09T20:24:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:29 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-29", "mode": "writeback"}]: dispatch 2026-03-09T20:24:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:29 vm09 ceph-mon[54524]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:24:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:29 vm09 ceph-mon[54524]: from='client.50197 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm05-94281-37"}]': finished 2026-03-09T20:24:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:29 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-29", "mode": "writeback"}]': finished 2026-03-09T20:24:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:29 vm09 ceph-mon[54524]: osdmap e219: 8 total, 8 up, 8 in 2026-03-09T20:24:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:29 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2079342607' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm05-94281-38"}]: dispatch 2026-03-09T20:24:30.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:30 vm05 ceph-mon[61345]: from='client.50206 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm05-94281-38"}]: dispatch 2026-03-09T20:24:30.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:30 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/2079342607' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm05-94281-38"}]: dispatch 2026-03-09T20:24:30.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:30 vm05 ceph-mon[61345]: from='client.50206 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm05-94281-38"}]: dispatch 2026-03-09T20:24:30.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:30 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2079342607' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm05-94281-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:30.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:30 vm05 ceph-mon[61345]: from='client.50206 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm05-94281-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:30.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:30 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:24:30.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:30 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:24:30.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:30 vm05 ceph-mon[61345]: from='client.50206 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm05-94281-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:24:30.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:30 vm05 ceph-mon[61345]: osdmap e220: 8 total, 8 up, 8 in 2026-03-09T20:24:30.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:30 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2079342607' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStat_vm05-94281-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm05-94281-38"}]: dispatch 2026-03-09T20:24:30.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:30 vm05 ceph-mon[61345]: from='client.50206 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStat_vm05-94281-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm05-94281-38"}]: dispatch 2026-03-09T20:24:30.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:30 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/100059633' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-94310-46"}]: dispatch 2026-03-09T20:24:30.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:30 vm05 ceph-mon[51870]: from='client.50206 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm05-94281-38"}]: dispatch 2026-03-09T20:24:30.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:30 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/2079342607' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm05-94281-38"}]: dispatch 2026-03-09T20:24:30.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:30 vm05 ceph-mon[51870]: from='client.50206 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm05-94281-38"}]: dispatch 2026-03-09T20:24:30.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:30 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2079342607' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm05-94281-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:30.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:30 vm05 ceph-mon[51870]: from='client.50206 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm05-94281-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:30.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:30 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:24:30.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:30 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:24:30.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:30 vm05 ceph-mon[51870]: from='client.50206 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm05-94281-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:24:30.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:30 vm05 ceph-mon[51870]: osdmap e220: 8 total, 8 up, 8 in 2026-03-09T20:24:30.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:30 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2079342607' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStat_vm05-94281-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm05-94281-38"}]: dispatch 2026-03-09T20:24:30.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:30 vm05 ceph-mon[51870]: from='client.50206 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStat_vm05-94281-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm05-94281-38"}]: dispatch 2026-03-09T20:24:30.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:30 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/100059633' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-94310-46"}]: dispatch 2026-03-09T20:24:30.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:30 vm09 ceph-mon[54524]: from='client.50206 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm05-94281-38"}]: dispatch 2026-03-09T20:24:30.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:30 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/2079342607' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm05-94281-38"}]: dispatch 2026-03-09T20:24:30.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:30 vm09 ceph-mon[54524]: from='client.50206 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm05-94281-38"}]: dispatch 2026-03-09T20:24:30.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:30 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2079342607' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm05-94281-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:30.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:30 vm09 ceph-mon[54524]: from='client.50206 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm05-94281-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:30.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:30 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:24:30.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:30 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:24:30.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:30 vm09 ceph-mon[54524]: from='client.50206 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm05-94281-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:24:30.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:30 vm09 ceph-mon[54524]: osdmap e220: 8 total, 8 up, 8 in 2026-03-09T20:24:30.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:30 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2079342607' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStat_vm05-94281-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm05-94281-38"}]: dispatch 2026-03-09T20:24:30.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:30 vm09 ceph-mon[54524]: from='client.50206 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStat_vm05-94281-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm05-94281-38"}]: dispatch 2026-03-09T20:24:30.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:30 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/100059633' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-94310-46"}]: dispatch 2026-03-09T20:24:31.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:31 vm05 ceph-mon[61345]: pgmap v276: 300 pgs: 8 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 286 active+clean; 4.4 MiB data, 720 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:24:31.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:31 vm05 ceph-mon[61345]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:24:31.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:31 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/100059633' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-94310-46"}]': finished 2026-03-09T20:24:31.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:31 vm05 ceph-mon[61345]: osdmap e221: 8 total, 8 up, 8 in 2026-03-09T20:24:31.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:31 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/100059633' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-94310-46"}]: dispatch 2026-03-09T20:24:31.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:31 vm05 ceph-mon[51870]: pgmap v276: 300 pgs: 8 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 286 active+clean; 4.4 MiB data, 720 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:24:31.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:31 vm05 ceph-mon[51870]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:24:31.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:31 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/100059633' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-94310-46"}]': finished 2026-03-09T20:24:31.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:31 vm05 ceph-mon[51870]: osdmap e221: 8 total, 8 up, 8 in 2026-03-09T20:24:31.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:31 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/100059633' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-94310-46"}]: dispatch 2026-03-09T20:24:31.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:31 vm09 ceph-mon[54524]: pgmap v276: 300 pgs: 8 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 286 active+clean; 4.4 MiB data, 720 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:24:31.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:31 vm09 ceph-mon[54524]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:24:31.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:31 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/100059633' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-94310-46"}]': finished 2026-03-09T20:24:31.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:31 vm09 ceph-mon[54524]: osdmap e221: 8 total, 8 up, 8 in 2026-03-09T20:24:31.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:31 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/100059633' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-94310-46"}]: dispatch 2026-03-09T20:24:33.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:33 vm05 ceph-mon[61345]: pgmap v279: 292 pgs: 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 286 active+clean; 12 MiB data, 760 MiB used, 159 GiB / 160 GiB avail; 10 KiB/s rd, 255 B/s wr, 19 op/s 2026-03-09T20:24:33.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:33 vm05 ceph-mon[61345]: from='client.50206 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStat_vm05-94281-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm05-94281-38"}]': finished 2026-03-09T20:24:33.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:33 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/100059633' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-94310-46"}]': finished 2026-03-09T20:24:33.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:33 vm05 ceph-mon[61345]: osdmap e222: 8 total, 8 up, 8 in 2026-03-09T20:24:33.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:33 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/833328814' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-94310-47"}]: dispatch 2026-03-09T20:24:33.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:33 vm05 ceph-mon[61345]: from='client.50212 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-94310-47"}]: dispatch 2026-03-09T20:24:33.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:33 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/833328814' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-94310-47"}]: dispatch 2026-03-09T20:24:33.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:33 vm05 ceph-mon[61345]: from='client.50212 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-94310-47"}]: dispatch 2026-03-09T20:24:33.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:33 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/833328814' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm05-94310-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:33.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:33 vm05 ceph-mon[61345]: from='client.50212 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm05-94310-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:33.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:33 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:24:33.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:33 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:24:33.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:33 vm05 ceph-mon[51870]: pgmap v279: 292 pgs: 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 286 active+clean; 12 MiB data, 760 MiB used, 159 GiB / 160 GiB avail; 10 KiB/s rd, 255 B/s wr, 19 op/s 2026-03-09T20:24:33.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:33 vm05 ceph-mon[51870]: from='client.50206 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStat_vm05-94281-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm05-94281-38"}]': finished 2026-03-09T20:24:33.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:33 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/100059633' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-94310-46"}]': finished 2026-03-09T20:24:33.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:33 vm05 ceph-mon[51870]: osdmap e222: 8 total, 8 up, 8 in 2026-03-09T20:24:33.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:33 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/833328814' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-94310-47"}]: dispatch 2026-03-09T20:24:33.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:33 vm05 ceph-mon[51870]: from='client.50212 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-94310-47"}]: dispatch 2026-03-09T20:24:33.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:33 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/833328814' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-94310-47"}]: dispatch 2026-03-09T20:24:33.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:33 vm05 ceph-mon[51870]: from='client.50212 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-94310-47"}]: dispatch 2026-03-09T20:24:33.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:33 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/833328814' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm05-94310-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:33.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:33 vm05 ceph-mon[51870]: from='client.50212 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm05-94310-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:33.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:33 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:24:33.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:33 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:24:33.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:33 vm09 ceph-mon[54524]: pgmap v279: 292 pgs: 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 286 active+clean; 12 MiB data, 760 MiB used, 159 GiB / 160 GiB avail; 10 KiB/s rd, 255 B/s wr, 19 op/s 2026-03-09T20:24:33.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:33 vm09 ceph-mon[54524]: from='client.50206 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStat_vm05-94281-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm05-94281-38"}]': finished 2026-03-09T20:24:33.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:33 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/100059633' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-94310-46"}]': finished 2026-03-09T20:24:33.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:33 vm09 ceph-mon[54524]: osdmap e222: 8 total, 8 up, 8 in 2026-03-09T20:24:33.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:33 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/833328814' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-94310-47"}]: dispatch 2026-03-09T20:24:33.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:33 vm09 ceph-mon[54524]: from='client.50212 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-94310-47"}]: dispatch 2026-03-09T20:24:33.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:33 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/833328814' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-94310-47"}]: dispatch 2026-03-09T20:24:33.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:33 vm09 ceph-mon[54524]: from='client.50212 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-94310-47"}]: dispatch 2026-03-09T20:24:33.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:33 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/833328814' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm05-94310-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:33.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:33 vm09 ceph-mon[54524]: from='client.50212 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm05-94310-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:33.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:33 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:24:33.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:33 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:24:34.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:34 vm05 ceph-mon[61345]: from='client.50212 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm05-94310-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:24:34.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:34 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:24:34.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:34 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-29"}]: dispatch 2026-03-09T20:24:34.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:34 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/833328814' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm05-94310-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm05-94310-47"}]: dispatch 2026-03-09T20:24:34.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:34 vm05 ceph-mon[61345]: osdmap e223: 8 total, 8 up, 8 in 2026-03-09T20:24:34.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:34 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-29"}]: dispatch 2026-03-09T20:24:34.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:34 vm05 ceph-mon[61345]: from='client.50212 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm05-94310-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm05-94310-47"}]: dispatch 2026-03-09T20:24:34.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:34 vm05 ceph-mon[51870]: from='client.50212 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm05-94310-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:24:34.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:34 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:24:34.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:34 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-29"}]: dispatch 2026-03-09T20:24:34.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:34 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/833328814' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm05-94310-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm05-94310-47"}]: dispatch 2026-03-09T20:24:34.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:34 vm05 ceph-mon[51870]: osdmap e223: 8 total, 8 up, 8 in 2026-03-09T20:24:34.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:34 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-29"}]: dispatch 2026-03-09T20:24:34.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:34 vm05 ceph-mon[51870]: from='client.50212 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm05-94310-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm05-94310-47"}]: dispatch 2026-03-09T20:24:34.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:34 vm09 ceph-mon[54524]: from='client.50212 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm05-94310-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:24:34.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:34 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:24:34.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:34 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-29"}]: dispatch 2026-03-09T20:24:34.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:34 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/833328814' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm05-94310-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm05-94310-47"}]: dispatch 2026-03-09T20:24:34.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:34 vm09 ceph-mon[54524]: osdmap e223: 8 total, 8 up, 8 in 2026-03-09T20:24:34.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:34 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-29"}]: dispatch 2026-03-09T20:24:34.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:34 vm09 ceph-mon[54524]: from='client.50212 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm05-94310-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm05-94310-47"}]: dispatch 2026-03-09T20:24:35.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:35 vm05 ceph-mon[61345]: pgmap v282: 300 pgs: 8 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 286 active+clean; 12 MiB data, 760 MiB used, 159 GiB / 160 GiB avail; 10 KiB/s rd, 255 B/s wr, 19 op/s 2026-03-09T20:24:35.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:35 vm05 ceph-mon[61345]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:24:35.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:35 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-29"}]': finished 2026-03-09T20:24:35.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:35 vm05 ceph-mon[61345]: osdmap e224: 8 total, 8 up, 8 in 2026-03-09T20:24:35.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:35 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2079342607' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm05-94281-38"}]: dispatch 2026-03-09T20:24:35.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:35 vm05 ceph-mon[61345]: from='client.50206 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm05-94281-38"}]: dispatch 2026-03-09T20:24:35.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:35 vm05 ceph-mon[51870]: pgmap v282: 300 pgs: 8 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 286 active+clean; 12 MiB data, 760 MiB used, 159 GiB / 160 GiB avail; 10 KiB/s rd, 255 B/s wr, 19 op/s 2026-03-09T20:24:35.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:35 vm05 ceph-mon[51870]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:24:35.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:35 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-29"}]': finished 2026-03-09T20:24:35.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:35 vm05 ceph-mon[51870]: osdmap e224: 8 total, 8 up, 8 in 2026-03-09T20:24:35.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:35 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/2079342607' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm05-94281-38"}]: dispatch 2026-03-09T20:24:35.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:35 vm05 ceph-mon[51870]: from='client.50206 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm05-94281-38"}]: dispatch 2026-03-09T20:24:35.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:35 vm09 ceph-mon[54524]: pgmap v282: 300 pgs: 8 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 286 active+clean; 12 MiB data, 760 MiB used, 159 GiB / 160 GiB avail; 10 KiB/s rd, 255 B/s wr, 19 op/s 2026-03-09T20:24:35.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:35 vm09 ceph-mon[54524]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:24:35.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:35 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-29"}]': finished 2026-03-09T20:24:35.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:35 vm09 ceph-mon[54524]: osdmap e224: 8 total, 8 up, 8 in 2026-03-09T20:24:35.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:35 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2079342607' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm05-94281-38"}]: dispatch 2026-03-09T20:24:35.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:35 vm09 ceph-mon[54524]: from='client.50206 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm05-94281-38"}]: dispatch 2026-03-09T20:24:35.772 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:24:35 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:24:36.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:36 vm05 ceph-mon[61345]: from='client.50212 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm05-94310-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm05-94310-47"}]': finished 2026-03-09T20:24:36.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:36 vm05 ceph-mon[61345]: from='client.50206 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm05-94281-38"}]': finished 2026-03-09T20:24:36.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:36 vm05 ceph-mon[61345]: osdmap e225: 8 total, 8 up, 8 in 2026-03-09T20:24:36.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:36 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/2079342607' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm05-94281-38"}]: dispatch 2026-03-09T20:24:36.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:36 vm05 ceph-mon[61345]: from='client.50206 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm05-94281-38"}]: dispatch 2026-03-09T20:24:36.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:36 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:24:36.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:36 vm05 ceph-mon[61345]: from='client.50206 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm05-94281-38"}]': finished 2026-03-09T20:24:36.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:36 vm05 ceph-mon[61345]: osdmap e226: 8 total, 8 up, 8 in 2026-03-09T20:24:36.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:36 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:36.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:36 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:36.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:36 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/904593530' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm05-94281-39"}]: dispatch 2026-03-09T20:24:36.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:36 vm05 ceph-mon[61345]: from='client.49496 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm05-94281-39"}]: dispatch 2026-03-09T20:24:36.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:36 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/904593530' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm05-94281-39"}]: dispatch 2026-03-09T20:24:36.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:36 vm05 ceph-mon[61345]: from='client.49496 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm05-94281-39"}]: dispatch 2026-03-09T20:24:36.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:36 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/904593530' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm05-94281-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:36.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:36 vm05 ceph-mon[61345]: from='client.49496 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm05-94281-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:36.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:36 vm05 ceph-mon[51870]: from='client.50212 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm05-94310-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm05-94310-47"}]': finished 2026-03-09T20:24:36.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:36 vm05 ceph-mon[51870]: from='client.50206 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm05-94281-38"}]': finished 2026-03-09T20:24:36.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:36 vm05 ceph-mon[51870]: osdmap e225: 8 total, 8 up, 8 in 2026-03-09T20:24:36.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:36 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2079342607' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm05-94281-38"}]: dispatch 2026-03-09T20:24:36.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:36 vm05 ceph-mon[51870]: from='client.50206 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm05-94281-38"}]: dispatch 2026-03-09T20:24:36.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:36 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:24:36.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:36 vm05 ceph-mon[51870]: from='client.50206 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm05-94281-38"}]': finished 2026-03-09T20:24:36.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:36 vm05 ceph-mon[51870]: osdmap e226: 8 total, 8 up, 8 in 2026-03-09T20:24:36.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:36 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:36.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:36 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:36.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:36 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/904593530' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm05-94281-39"}]: dispatch 2026-03-09T20:24:36.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:36 vm05 ceph-mon[51870]: from='client.49496 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm05-94281-39"}]: dispatch 2026-03-09T20:24:36.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:36 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/904593530' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm05-94281-39"}]: dispatch 2026-03-09T20:24:36.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:36 vm05 ceph-mon[51870]: from='client.49496 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm05-94281-39"}]: dispatch 2026-03-09T20:24:36.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:36 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/904593530' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm05-94281-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:36.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:36 vm05 ceph-mon[51870]: from='client.49496 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm05-94281-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:36.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:36 vm09 ceph-mon[54524]: from='client.50212 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm05-94310-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm05-94310-47"}]': finished 2026-03-09T20:24:36.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:36 vm09 ceph-mon[54524]: from='client.50206 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm05-94281-38"}]': finished 2026-03-09T20:24:36.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:36 vm09 ceph-mon[54524]: osdmap e225: 8 total, 8 up, 8 in 2026-03-09T20:24:36.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:36 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2079342607' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm05-94281-38"}]: dispatch 2026-03-09T20:24:36.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:36 vm09 ceph-mon[54524]: from='client.50206 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm05-94281-38"}]: dispatch 2026-03-09T20:24:36.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:36 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:24:36.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:36 vm09 ceph-mon[54524]: from='client.50206 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm05-94281-38"}]': finished 2026-03-09T20:24:36.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:36 vm09 ceph-mon[54524]: osdmap e226: 8 total, 8 up, 8 in 2026-03-09T20:24:36.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:36 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:36.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:36 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:36.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:36 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/904593530' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm05-94281-39"}]: dispatch 2026-03-09T20:24:36.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:36 vm09 ceph-mon[54524]: from='client.49496 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm05-94281-39"}]: dispatch 2026-03-09T20:24:36.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:36 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/904593530' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm05-94281-39"}]: dispatch 2026-03-09T20:24:36.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:36 vm09 ceph-mon[54524]: from='client.49496 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm05-94281-39"}]: dispatch 2026-03-09T20:24:36.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:36 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/904593530' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm05-94281-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:36.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:36 vm09 ceph-mon[54524]: from='client.49496 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm05-94281-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:37.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:37 vm05 ceph-mon[61345]: pgmap v286: 300 pgs: 4 creating+peering, 36 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 254 active+clean; 8.4 MiB data, 737 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:24:37.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:37 vm05 ceph-mon[61345]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:24:37.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:37 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:24:37.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:37 vm05 ceph-mon[61345]: from='client.49496 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm05-94281-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:24:37.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:37 vm05 ceph-mon[61345]: osdmap e227: 8 total, 8 up, 8 in 2026-03-09T20:24:37.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:37 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/833328814' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-94310-47"}]: dispatch 2026-03-09T20:24:37.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:37 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-31", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:24:37.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:37 vm05 ceph-mon[61345]: from='client.50212 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-94310-47"}]: dispatch 2026-03-09T20:24:37.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:37 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-31", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:24:37.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:37 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/904593530' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm05-94281-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm05-94281-39"}]: dispatch 2026-03-09T20:24:37.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:37 vm05 ceph-mon[61345]: from='client.49496 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm05-94281-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm05-94281-39"}]: dispatch 2026-03-09T20:24:37.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:37 vm05 ceph-mon[51870]: pgmap v286: 300 pgs: 4 creating+peering, 36 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 254 active+clean; 8.4 MiB data, 737 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:24:37.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:37 vm05 ceph-mon[51870]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:24:37.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:37 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:24:37.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:37 vm05 ceph-mon[51870]: from='client.49496 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm05-94281-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:24:37.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:37 vm05 ceph-mon[51870]: osdmap e227: 8 total, 8 up, 8 in 2026-03-09T20:24:37.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:37 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/833328814' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-94310-47"}]: dispatch 2026-03-09T20:24:37.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:37 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-31", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:24:37.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:37 vm05 ceph-mon[51870]: from='client.50212 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-94310-47"}]: dispatch 2026-03-09T20:24:37.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:37 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-31", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:24:37.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:37 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/904593530' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm05-94281-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm05-94281-39"}]: dispatch 2026-03-09T20:24:37.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:37 vm05 ceph-mon[51870]: from='client.49496 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm05-94281-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm05-94281-39"}]: dispatch 2026-03-09T20:24:37.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:37 vm09 ceph-mon[54524]: pgmap v286: 300 pgs: 4 creating+peering, 36 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 254 active+clean; 8.4 MiB data, 737 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:24:37.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:37 vm09 ceph-mon[54524]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:24:37.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:37 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:24:37.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:37 vm09 ceph-mon[54524]: from='client.49496 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm05-94281-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:24:37.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:37 vm09 ceph-mon[54524]: osdmap e227: 8 total, 8 up, 8 in 2026-03-09T20:24:37.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:37 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/833328814' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-94310-47"}]: dispatch 2026-03-09T20:24:37.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:37 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-31", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:24:37.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:37 vm09 ceph-mon[54524]: from='client.50212 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-94310-47"}]: dispatch 2026-03-09T20:24:37.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:37 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-31", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:24:37.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:37 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/904593530' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm05-94281-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm05-94281-39"}]: dispatch 2026-03-09T20:24:37.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:37 vm09 ceph-mon[54524]: from='client.49496 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm05-94281-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm05-94281-39"}]: dispatch 2026-03-09T20:24:38.909 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:24:38 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:24:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:24:39.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:39 vm05 ceph-mon[61345]: from='client.50212 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-94310-47"}]': finished 2026-03-09T20:24:39.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:39 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-31", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:24:39.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:39 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/833328814' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-94310-47"}]: dispatch 2026-03-09T20:24:39.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:39 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-31"}]: dispatch 2026-03-09T20:24:39.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:39 vm05 ceph-mon[61345]: osdmap e228: 8 total, 8 up, 8 in 2026-03-09T20:24:39.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:39 vm05 ceph-mon[61345]: from='client.50212 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-94310-47"}]: dispatch 2026-03-09T20:24:39.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:39 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-31"}]: dispatch 2026-03-09T20:24:39.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:39 vm05 ceph-mon[61345]: pgmap v289: 292 pgs: 32 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 254 active+clean; 8.4 MiB data, 737 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:24:39.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:39 vm05 ceph-mon[51870]: from='client.50212 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-94310-47"}]': finished 2026-03-09T20:24:39.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:39 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-31", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:24:39.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:39 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/833328814' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-94310-47"}]: dispatch 2026-03-09T20:24:39.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:39 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-31"}]: dispatch 2026-03-09T20:24:39.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:39 vm05 ceph-mon[51870]: osdmap e228: 8 total, 8 up, 8 in 2026-03-09T20:24:39.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:39 vm05 ceph-mon[51870]: from='client.50212 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-94310-47"}]: dispatch 2026-03-09T20:24:39.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:39 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-31"}]: dispatch 2026-03-09T20:24:39.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:39 vm05 ceph-mon[51870]: pgmap v289: 292 pgs: 32 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 254 active+clean; 8.4 MiB data, 737 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:24:39.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:39 vm09 ceph-mon[54524]: from='client.50212 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-94310-47"}]': finished 2026-03-09T20:24:39.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:39 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-31", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:24:39.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:39 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/833328814' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-94310-47"}]: dispatch 2026-03-09T20:24:39.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:39 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-31"}]: dispatch 2026-03-09T20:24:39.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:39 vm09 ceph-mon[54524]: osdmap e228: 8 total, 8 up, 8 in 2026-03-09T20:24:39.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:39 vm09 ceph-mon[54524]: from='client.50212 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-94310-47"}]: dispatch 2026-03-09T20:24:39.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:39 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-31"}]: dispatch 2026-03-09T20:24:39.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:39 vm09 ceph-mon[54524]: pgmap v289: 292 pgs: 32 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 254 active+clean; 8.4 MiB data, 737 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:24:40.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:40 vm05 ceph-mon[61345]: from='client.49496 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm05-94281-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm05-94281-39"}]': finished 2026-03-09T20:24:40.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:40 vm05 ceph-mon[61345]: from='client.50212 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-94310-47"}]': finished 2026-03-09T20:24:40.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:40 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-31"}]': finished 2026-03-09T20:24:40.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:40 vm05 ceph-mon[61345]: osdmap e229: 8 total, 8 up, 8 in 2026-03-09T20:24:40.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:40 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-31", "mode": "writeback"}]: dispatch 2026-03-09T20:24:40.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:40 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-31", "mode": "writeback"}]: dispatch 2026-03-09T20:24:40.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:40 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1126308454' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm05-94310-48"}]: dispatch 2026-03-09T20:24:40.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:40 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1126308454' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm05-94310-48"}]: dispatch 2026-03-09T20:24:40.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:40 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1126308454' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm05-94310-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:40.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:40 vm05 ceph-mon[51870]: from='client.49496 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm05-94281-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm05-94281-39"}]': finished 2026-03-09T20:24:40.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:40 vm05 ceph-mon[51870]: from='client.50212 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-94310-47"}]': finished 2026-03-09T20:24:40.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:40 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-31"}]': finished 2026-03-09T20:24:40.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:40 vm05 ceph-mon[51870]: osdmap e229: 8 total, 8 up, 8 in 2026-03-09T20:24:40.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:40 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-31", "mode": "writeback"}]: dispatch 2026-03-09T20:24:40.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:40 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-31", "mode": "writeback"}]: dispatch 2026-03-09T20:24:40.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:40 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1126308454' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm05-94310-48"}]: dispatch 2026-03-09T20:24:40.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:40 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1126308454' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm05-94310-48"}]: dispatch 2026-03-09T20:24:40.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:40 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/1126308454' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm05-94310-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:40.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:40 vm09 ceph-mon[54524]: from='client.49496 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm05-94281-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm05-94281-39"}]': finished 2026-03-09T20:24:40.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:40 vm09 ceph-mon[54524]: from='client.50212 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-94310-47"}]': finished 2026-03-09T20:24:40.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:40 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-31"}]': finished 2026-03-09T20:24:40.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:40 vm09 ceph-mon[54524]: osdmap e229: 8 total, 8 up, 8 in 2026-03-09T20:24:40.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:40 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-31", "mode": "writeback"}]: dispatch 2026-03-09T20:24:40.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:40 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-31", "mode": "writeback"}]: dispatch 2026-03-09T20:24:40.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:40 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1126308454' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm05-94310-48"}]: dispatch 2026-03-09T20:24:40.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:40 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1126308454' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm05-94310-48"}]: dispatch 2026-03-09T20:24:40.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:40 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1126308454' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm05-94310-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:41.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:41 vm05 ceph-mon[61345]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:24:41.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:41 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-31", "mode": "writeback"}]': finished 2026-03-09T20:24:41.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:41 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1126308454' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm05-94310-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:24:41.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:41 vm05 ceph-mon[61345]: osdmap e230: 8 total, 8 up, 8 in 2026-03-09T20:24:41.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:41 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1126308454' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm05-94310-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm05-94310-48"}]: dispatch 2026-03-09T20:24:41.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:41 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:24:41.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:41 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:24:41.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:41 vm05 ceph-mon[61345]: pgmap v292: 300 pgs: 40 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 254 active+clean; 8.4 MiB data, 737 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:24:41.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:41 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:24:41.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:41 vm05 ceph-mon[61345]: osdmap e231: 8 total, 8 up, 8 in 2026-03-09T20:24:41.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:41 vm05 ceph-mon[51870]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:24:41.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:41 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-31", "mode": "writeback"}]': finished 2026-03-09T20:24:41.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:41 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1126308454' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm05-94310-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:24:41.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:41 vm05 ceph-mon[51870]: osdmap e230: 8 total, 8 up, 8 in 2026-03-09T20:24:41.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:41 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1126308454' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm05-94310-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm05-94310-48"}]: dispatch 2026-03-09T20:24:41.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:41 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:24:41.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:41 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:24:41.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:41 vm05 ceph-mon[51870]: pgmap v292: 300 pgs: 40 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 254 active+clean; 8.4 MiB data, 737 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:24:41.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:41 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:24:41.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:41 vm05 ceph-mon[51870]: osdmap e231: 8 total, 8 up, 8 in 2026-03-09T20:24:41.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:41 vm09 ceph-mon[54524]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:24:41.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:41 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-31", "mode": "writeback"}]': finished 2026-03-09T20:24:41.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:41 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1126308454' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm05-94310-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:24:41.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:41 vm09 ceph-mon[54524]: osdmap e230: 8 total, 8 up, 8 in 2026-03-09T20:24:41.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:41 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1126308454' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm05-94310-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm05-94310-48"}]: dispatch 2026-03-09T20:24:41.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:41 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:24:41.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:41 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:24:41.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:41 vm09 ceph-mon[54524]: pgmap v292: 300 pgs: 40 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 254 active+clean; 8.4 MiB data, 737 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:24:41.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:41 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:24:41.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:41 vm09 ceph-mon[54524]: osdmap e231: 8 total, 8 up, 8 in 2026-03-09T20:24:42.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:42 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-31"}]: dispatch 2026-03-09T20:24:42.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:42 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/904593530' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm05-94281-39"}]: dispatch 2026-03-09T20:24:42.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:42 vm09 ceph-mon[54524]: from='client.49496 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm05-94281-39"}]: dispatch 2026-03-09T20:24:42.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:42 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-31"}]: dispatch 2026-03-09T20:24:42.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:42 vm09 ceph-mon[54524]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:24:42.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:42 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-31"}]: dispatch 2026-03-09T20:24:42.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:42 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/904593530' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm05-94281-39"}]: dispatch 2026-03-09T20:24:42.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:42 vm05 ceph-mon[61345]: from='client.49496 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm05-94281-39"}]: dispatch 2026-03-09T20:24:42.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:42 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-31"}]: dispatch 2026-03-09T20:24:42.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:42 vm05 ceph-mon[61345]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:24:42.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:42 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-31"}]: dispatch 2026-03-09T20:24:42.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:42 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/904593530' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm05-94281-39"}]: dispatch 2026-03-09T20:24:42.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:42 vm05 ceph-mon[51870]: from='client.49496 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm05-94281-39"}]: dispatch 2026-03-09T20:24:42.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:42 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-31"}]: dispatch 2026-03-09T20:24:42.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:42 vm05 ceph-mon[51870]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:24:43.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:43 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1126308454' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm05-94310-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm05-94310-48"}]': finished 2026-03-09T20:24:43.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:43 vm09 ceph-mon[54524]: from='client.49496 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm05-94281-39"}]': finished 2026-03-09T20:24:43.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:43 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-31"}]': finished 2026-03-09T20:24:43.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:43 vm09 ceph-mon[54524]: osdmap e232: 8 total, 8 up, 8 in 2026-03-09T20:24:43.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:43 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/904593530' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm05-94281-39"}]: dispatch 2026-03-09T20:24:43.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:43 vm09 ceph-mon[54524]: from='client.49496 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm05-94281-39"}]: dispatch 2026-03-09T20:24:43.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:43 vm09 ceph-mon[54524]: pgmap v295: 300 pgs: 8 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 286 active+clean; 4.4 MiB data, 710 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T20:24:43.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:43 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1126308454' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm05-94310-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm05-94310-48"}]': finished 2026-03-09T20:24:43.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:43 vm05 ceph-mon[61345]: from='client.49496 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm05-94281-39"}]': finished 2026-03-09T20:24:43.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:43 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-31"}]': finished 2026-03-09T20:24:43.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:43 vm05 ceph-mon[61345]: osdmap e232: 8 total, 8 up, 8 in 2026-03-09T20:24:43.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:43 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/904593530' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm05-94281-39"}]: dispatch 2026-03-09T20:24:43.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:43 vm05 ceph-mon[61345]: from='client.49496 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm05-94281-39"}]: dispatch 2026-03-09T20:24:43.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:43 vm05 ceph-mon[61345]: pgmap v295: 300 pgs: 8 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 286 active+clean; 4.4 MiB data, 710 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T20:24:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:43 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1126308454' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm05-94310-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm05-94310-48"}]': finished 2026-03-09T20:24:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:43 vm05 ceph-mon[51870]: from='client.49496 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm05-94281-39"}]': finished 2026-03-09T20:24:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:43 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-31"}]': finished 2026-03-09T20:24:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:43 vm05 ceph-mon[51870]: osdmap e232: 8 total, 8 up, 8 in 2026-03-09T20:24:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:43 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/904593530' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm05-94281-39"}]: dispatch 2026-03-09T20:24:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:43 vm05 ceph-mon[51870]: from='client.49496 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm05-94281-39"}]: dispatch 2026-03-09T20:24:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:43 vm05 ceph-mon[51870]: pgmap v295: 300 pgs: 8 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 286 active+clean; 4.4 MiB data, 710 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T20:24:44.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:44 vm09 ceph-mon[54524]: from='client.49496 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm05-94281-39"}]': finished 2026-03-09T20:24:44.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:44 vm09 ceph-mon[54524]: osdmap e233: 8 total, 8 up, 8 in 2026-03-09T20:24:44.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:44 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/4191163445' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm05-94281-40"}]: dispatch 2026-03-09T20:24:44.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:44 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/4191163445' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm05-94281-40"}]: dispatch 2026-03-09T20:24:44.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:44 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/4191163445' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm05-94281-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:44.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:44 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/4191163445' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm05-94281-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:24:44.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:44 vm09 ceph-mon[54524]: osdmap e234: 8 total, 8 up, 8 in 2026-03-09T20:24:44.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:44 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/4191163445' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemove_vm05-94281-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm05-94281-40"}]: dispatch 2026-03-09T20:24:44.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:44 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1126308454' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm05-94310-48"}]: dispatch 2026-03-09T20:24:44.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:44 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:44.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:44 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:44.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:44 vm05 ceph-mon[61345]: from='client.49496 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm05-94281-39"}]': finished 2026-03-09T20:24:44.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:44 vm05 ceph-mon[61345]: osdmap e233: 8 total, 8 up, 8 in 2026-03-09T20:24:44.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:44 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/4191163445' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm05-94281-40"}]: dispatch 2026-03-09T20:24:44.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:44 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/4191163445' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm05-94281-40"}]: dispatch 2026-03-09T20:24:44.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:44 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/4191163445' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm05-94281-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:44.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:44 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/4191163445' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm05-94281-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:24:44.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:44 vm05 ceph-mon[61345]: osdmap e234: 8 total, 8 up, 8 in 2026-03-09T20:24:44.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:44 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/4191163445' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemove_vm05-94281-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm05-94281-40"}]: dispatch 2026-03-09T20:24:44.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:44 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1126308454' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm05-94310-48"}]: dispatch 2026-03-09T20:24:44.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:44 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:44.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:44 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:44.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:44 vm05 ceph-mon[51870]: from='client.49496 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm05-94281-39"}]': finished 2026-03-09T20:24:44.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:44 vm05 ceph-mon[51870]: osdmap e233: 8 total, 8 up, 8 in 2026-03-09T20:24:44.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:44 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/4191163445' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm05-94281-40"}]: dispatch 2026-03-09T20:24:44.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:44 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/4191163445' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm05-94281-40"}]: dispatch 2026-03-09T20:24:44.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:44 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/4191163445' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm05-94281-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:44.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:44 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/4191163445' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm05-94281-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:24:44.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:44 vm05 ceph-mon[51870]: osdmap e234: 8 total, 8 up, 8 in 2026-03-09T20:24:44.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:44 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/4191163445' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemove_vm05-94281-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm05-94281-40"}]: dispatch 2026-03-09T20:24:44.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:44 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1126308454' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm05-94310-48"}]: dispatch 2026-03-09T20:24:44.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:44 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:44.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:44 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:45.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:45 vm09 ceph-mon[54524]: pgmap v298: 292 pgs: 32 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 254 active+clean; 4.4 MiB data, 710 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:24:45.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:45 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:24:45.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:45 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:24:45.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:45 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1126308454' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm05-94310-48"}]': finished 2026-03-09T20:24:45.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:45 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:24:45.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:45 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-33", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:24:45.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:45 vm09 ceph-mon[54524]: osdmap e235: 8 total, 8 up, 8 in 2026-03-09T20:24:45.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:45 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/1126308454' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm05-94310-48"}]: dispatch 2026-03-09T20:24:45.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:45 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-33", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:24:45.773 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:24:45 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:24:45.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:45 vm05 ceph-mon[51870]: pgmap v298: 292 pgs: 32 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 254 active+clean; 4.4 MiB data, 710 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:24:45.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:45 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:24:45.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:45 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:24:45.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:45 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1126308454' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm05-94310-48"}]': finished 2026-03-09T20:24:45.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:45 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:24:45.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:45 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-33", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:24:45.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:45 vm05 ceph-mon[51870]: osdmap e235: 8 total, 8 up, 8 in 2026-03-09T20:24:45.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:45 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/1126308454' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm05-94310-48"}]: dispatch 2026-03-09T20:24:45.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:45 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-33", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:24:45.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:45 vm05 ceph-mon[61345]: pgmap v298: 292 pgs: 32 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 254 active+clean; 4.4 MiB data, 710 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:24:45.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:45 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:24:45.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:45 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:24:45.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:45 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1126308454' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm05-94310-48"}]': finished 2026-03-09T20:24:45.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:45 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:24:45.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:45 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-33", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:24:45.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:45 vm05 ceph-mon[61345]: osdmap e235: 8 total, 8 up, 8 in 2026-03-09T20:24:45.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:45 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1126308454' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm05-94310-48"}]: dispatch 2026-03-09T20:24:45.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:45 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-33", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:24:46.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:46 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:24:46.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:46 vm05 ceph-mon[61345]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:24:46.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:46 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/4191163445' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemove_vm05-94281-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm05-94281-40"}]': finished 2026-03-09T20:24:46.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:46 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1126308454' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm05-94310-48"}]': finished 2026-03-09T20:24:46.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:46 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-33", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:24:46.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:46 vm05 ceph-mon[61345]: osdmap e236: 8 total, 8 up, 8 in 2026-03-09T20:24:46.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:46 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-33"}]: dispatch 2026-03-09T20:24:46.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:46 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-33"}]: dispatch 2026-03-09T20:24:46.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:46 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3589855434' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm05-94310-49"}]: dispatch 2026-03-09T20:24:46.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:46 vm05 ceph-mon[61345]: from='client.50230 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm05-94310-49"}]: dispatch 2026-03-09T20:24:46.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:46 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3589855434' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm05-94310-49"}]: dispatch 2026-03-09T20:24:46.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:46 vm05 ceph-mon[61345]: from='client.50230 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm05-94310-49"}]: dispatch 2026-03-09T20:24:46.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:46 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3589855434' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm05-94310-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:46.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:46 vm05 ceph-mon[61345]: from='client.50230 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm05-94310-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:46.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:46 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:24:46.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:46 vm05 ceph-mon[51870]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:24:46.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:46 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/4191163445' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemove_vm05-94281-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm05-94281-40"}]': finished 2026-03-09T20:24:46.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:46 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1126308454' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm05-94310-48"}]': finished 2026-03-09T20:24:46.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:46 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-33", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:24:46.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:46 vm05 ceph-mon[51870]: osdmap e236: 8 total, 8 up, 8 in 2026-03-09T20:24:46.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:46 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-33"}]: dispatch 2026-03-09T20:24:46.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:46 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-33"}]: dispatch 2026-03-09T20:24:46.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:46 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3589855434' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm05-94310-49"}]: dispatch 2026-03-09T20:24:46.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:46 vm05 ceph-mon[51870]: from='client.50230 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm05-94310-49"}]: dispatch 2026-03-09T20:24:46.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:46 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3589855434' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm05-94310-49"}]: dispatch 2026-03-09T20:24:46.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:46 vm05 ceph-mon[51870]: from='client.50230 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm05-94310-49"}]: dispatch 2026-03-09T20:24:46.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:46 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3589855434' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm05-94310-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:46.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:46 vm05 ceph-mon[51870]: from='client.50230 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm05-94310-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:47.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:46 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:24:47.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:46 vm09 ceph-mon[54524]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:24:47.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:46 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/4191163445' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemove_vm05-94281-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm05-94281-40"}]': finished 2026-03-09T20:24:47.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:46 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1126308454' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm05-94310-48"}]': finished 2026-03-09T20:24:47.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:46 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-33", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:24:47.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:46 vm09 ceph-mon[54524]: osdmap e236: 8 total, 8 up, 8 in 2026-03-09T20:24:47.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:46 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-33"}]: dispatch 2026-03-09T20:24:47.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:46 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-33"}]: dispatch 2026-03-09T20:24:47.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:46 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3589855434' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm05-94310-49"}]: dispatch 2026-03-09T20:24:47.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:46 vm09 ceph-mon[54524]: from='client.50230 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm05-94310-49"}]: dispatch 2026-03-09T20:24:47.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:46 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3589855434' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm05-94310-49"}]: dispatch 2026-03-09T20:24:47.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:46 vm09 ceph-mon[54524]: from='client.50230 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm05-94310-49"}]: dispatch 2026-03-09T20:24:47.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:46 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3589855434' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm05-94310-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:47.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:46 vm09 ceph-mon[54524]: from='client.50230 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm05-94310-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:48.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:47 vm09 ceph-mon[54524]: pgmap v301: 300 pgs: 8 unknown, 11 creating+activating, 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 276 active+clean; 4.4 MiB data, 714 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T20:24:48.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:47 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-33"}]': finished 2026-03-09T20:24:48.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:47 vm09 ceph-mon[54524]: from='client.50230 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm05-94310-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:24:48.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:47 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-33", "mode": "writeback"}]: dispatch 2026-03-09T20:24:48.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:47 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3589855434' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm05-94310-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm05-94310-49"}]: dispatch 2026-03-09T20:24:48.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:47 vm09 ceph-mon[54524]: osdmap e237: 8 total, 8 up, 8 in 2026-03-09T20:24:48.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:47 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-33", "mode": "writeback"}]: dispatch 2026-03-09T20:24:48.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:47 vm09 ceph-mon[54524]: from='client.50230 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm05-94310-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm05-94310-49"}]: dispatch 2026-03-09T20:24:48.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:47 vm05 ceph-mon[61345]: pgmap v301: 300 pgs: 8 unknown, 11 creating+activating, 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 276 active+clean; 4.4 MiB data, 714 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T20:24:48.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:47 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-33"}]': finished 2026-03-09T20:24:48.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:47 vm05 ceph-mon[61345]: from='client.50230 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm05-94310-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:24:48.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:47 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-33", "mode": "writeback"}]: dispatch 2026-03-09T20:24:48.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:47 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3589855434' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm05-94310-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm05-94310-49"}]: dispatch 2026-03-09T20:24:48.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:47 vm05 ceph-mon[61345]: osdmap e237: 8 total, 8 up, 8 in 2026-03-09T20:24:48.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:47 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-33", "mode": "writeback"}]: dispatch 2026-03-09T20:24:48.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:47 vm05 ceph-mon[61345]: from='client.50230 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm05-94310-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm05-94310-49"}]: dispatch 2026-03-09T20:24:48.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:47 vm05 ceph-mon[51870]: pgmap v301: 300 pgs: 8 unknown, 11 creating+activating, 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 276 active+clean; 4.4 MiB data, 714 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T20:24:48.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:47 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-33"}]': finished 2026-03-09T20:24:48.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:47 vm05 ceph-mon[51870]: from='client.50230 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm05-94310-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:24:48.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:47 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-33", "mode": "writeback"}]: dispatch 2026-03-09T20:24:48.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:47 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3589855434' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm05-94310-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm05-94310-49"}]: dispatch 2026-03-09T20:24:48.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:47 vm05 ceph-mon[51870]: osdmap e237: 8 total, 8 up, 8 in 2026-03-09T20:24:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:47 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-33", "mode": "writeback"}]: dispatch 2026-03-09T20:24:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:47 vm05 ceph-mon[51870]: from='client.50230 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm05-94310-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm05-94310-49"}]: dispatch 2026-03-09T20:24:48.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:48 vm05 ceph-mon[61345]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:24:48.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:48 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-33", "mode": "writeback"}]': finished 2026-03-09T20:24:48.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:48 vm05 ceph-mon[61345]: osdmap e238: 8 total, 8 up, 8 in 2026-03-09T20:24:48.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:48 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/4191163445' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm05-94281-40"}]: dispatch 2026-03-09T20:24:48.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:48 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:24:48.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:48 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:24:48.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:48 vm05 ceph-mon[51870]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:24:48.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:48 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-33", "mode": "writeback"}]': finished 2026-03-09T20:24:48.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:48 vm05 ceph-mon[51870]: osdmap e238: 8 total, 8 up, 8 in 2026-03-09T20:24:48.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:48 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/4191163445' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm05-94281-40"}]: dispatch 2026-03-09T20:24:48.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:48 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:24:48.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:48 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:24:48.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:24:48 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:24:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:24:49.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:48 vm09 ceph-mon[54524]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:24:49.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:48 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-33", "mode": "writeback"}]': finished 2026-03-09T20:24:49.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:48 vm09 ceph-mon[54524]: osdmap e238: 8 total, 8 up, 8 in 2026-03-09T20:24:49.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:48 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/4191163445' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm05-94281-40"}]: dispatch 2026-03-09T20:24:49.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:48 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:24:49.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:48 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:24:50.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:49 vm09 ceph-mon[54524]: pgmap v304: 292 pgs: 11 creating+activating, 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 276 active+clean; 4.4 MiB data, 714 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:24:50.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:49 vm09 ceph-mon[54524]: from='client.50230 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP_vm05-94310-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm05-94310-49"}]': finished 2026-03-09T20:24:50.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:49 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/4191163445' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm05-94281-40"}]': finished 2026-03-09T20:24:50.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:49 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:24:50.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:49 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-33"}]: dispatch 2026-03-09T20:24:50.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:49 vm09 ceph-mon[54524]: osdmap e239: 8 total, 8 up, 8 in 2026-03-09T20:24:50.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:49 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/4191163445' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm05-94281-40"}]: dispatch 2026-03-09T20:24:50.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:49 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-33"}]: dispatch 2026-03-09T20:24:50.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:49 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:24:50.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:49 vm05 ceph-mon[51870]: pgmap v304: 292 pgs: 11 creating+activating, 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 276 active+clean; 4.4 MiB data, 714 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:24:50.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:49 vm05 ceph-mon[51870]: from='client.50230 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP_vm05-94310-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm05-94310-49"}]': finished 2026-03-09T20:24:50.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:49 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/4191163445' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm05-94281-40"}]': finished 2026-03-09T20:24:50.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:49 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:24:50.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:49 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-33"}]: dispatch 2026-03-09T20:24:50.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:49 vm05 ceph-mon[51870]: osdmap e239: 8 total, 8 up, 8 in 2026-03-09T20:24:50.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:49 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/4191163445' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm05-94281-40"}]: dispatch 2026-03-09T20:24:50.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:49 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-33"}]: dispatch 2026-03-09T20:24:50.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:49 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:24:50.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:49 vm05 ceph-mon[61345]: pgmap v304: 292 pgs: 11 creating+activating, 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 276 active+clean; 4.4 MiB data, 714 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:24:50.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:49 vm05 ceph-mon[61345]: from='client.50230 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP_vm05-94310-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm05-94310-49"}]': finished 2026-03-09T20:24:50.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:49 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/4191163445' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm05-94281-40"}]': finished 2026-03-09T20:24:50.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:49 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:24:50.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:49 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-33"}]: dispatch 2026-03-09T20:24:50.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:49 vm05 ceph-mon[61345]: osdmap e239: 8 total, 8 up, 8 in 2026-03-09T20:24:50.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:49 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/4191163445' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm05-94281-40"}]: dispatch 2026-03-09T20:24:50.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:49 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-33"}]: dispatch 2026-03-09T20:24:50.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:49 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:24:51.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:50 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:24:51.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:50 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:24:51.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:50 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:24:51.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:50 vm09 ceph-mon[54524]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:24:51.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:50 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/4191163445' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemove_vm05-94281-40"}]': finished 2026-03-09T20:24:51.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:50 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-33"}]': finished 2026-03-09T20:24:51.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:50 vm09 ceph-mon[54524]: osdmap e240: 8 total, 8 up, 8 in 2026-03-09T20:24:51.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:50 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2696815369' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm05-94281-41"}]: dispatch 2026-03-09T20:24:51.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:50 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2696815369' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm05-94281-41"}]: dispatch 2026-03-09T20:24:51.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:50 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/2696815369' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm05-94281-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:51.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:50 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:24:51.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:50 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:24:51.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:50 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:24:51.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:50 vm05 ceph-mon[61345]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:24:51.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:50 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/4191163445' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemove_vm05-94281-40"}]': finished 2026-03-09T20:24:51.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:50 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-33"}]': finished 2026-03-09T20:24:51.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:50 vm05 ceph-mon[61345]: osdmap e240: 8 total, 8 up, 8 in 2026-03-09T20:24:51.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:50 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2696815369' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm05-94281-41"}]: dispatch 2026-03-09T20:24:51.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:50 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2696815369' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm05-94281-41"}]: dispatch 2026-03-09T20:24:51.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:50 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2696815369' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm05-94281-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:51.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:50 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:24:51.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:50 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:24:51.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:50 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:24:51.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:50 vm05 ceph-mon[51870]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:24:51.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:50 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/4191163445' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemove_vm05-94281-40"}]': finished 2026-03-09T20:24:51.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:50 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-33"}]': finished 2026-03-09T20:24:51.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:50 vm05 ceph-mon[51870]: osdmap e240: 8 total, 8 up, 8 in 2026-03-09T20:24:51.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:50 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2696815369' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm05-94281-41"}]: dispatch 2026-03-09T20:24:51.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:50 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2696815369' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm05-94281-41"}]: dispatch 2026-03-09T20:24:51.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:50 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2696815369' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm05-94281-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:52.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:51 vm09 ceph-mon[54524]: pgmap v307: 300 pgs: 8 unknown, 11 creating+activating, 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 276 active+clean; 4.4 MiB data, 714 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:24:52.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:51 vm09 ceph-mon[54524]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:24:52.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:51 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2696815369' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm05-94281-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:24:52.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:51 vm09 ceph-mon[54524]: osdmap e241: 8 total, 8 up, 8 in 2026-03-09T20:24:52.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:51 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2696815369' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClass_vm05-94281-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm05-94281-41"}]: dispatch 2026-03-09T20:24:52.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:51 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3589855434' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm05-94310-49"}]: dispatch 2026-03-09T20:24:52.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:51 vm09 ceph-mon[54524]: from='client.50230 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm05-94310-49"}]: dispatch 2026-03-09T20:24:52.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:51 vm05 ceph-mon[61345]: pgmap v307: 300 pgs: 8 unknown, 11 creating+activating, 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 276 active+clean; 4.4 MiB data, 714 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:24:52.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:51 vm05 ceph-mon[61345]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:24:52.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:51 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2696815369' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm05-94281-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:24:52.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:51 vm05 ceph-mon[61345]: osdmap e241: 8 total, 8 up, 8 in 2026-03-09T20:24:52.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:51 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2696815369' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClass_vm05-94281-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm05-94281-41"}]: dispatch 2026-03-09T20:24:52.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:51 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3589855434' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm05-94310-49"}]: dispatch 2026-03-09T20:24:52.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:51 vm05 ceph-mon[61345]: from='client.50230 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm05-94310-49"}]: dispatch 2026-03-09T20:24:52.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:51 vm05 ceph-mon[51870]: pgmap v307: 300 pgs: 8 unknown, 11 creating+activating, 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 276 active+clean; 4.4 MiB data, 714 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:24:52.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:51 vm05 ceph-mon[51870]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:24:52.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:51 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2696815369' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm05-94281-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:24:52.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:51 vm05 ceph-mon[51870]: osdmap e241: 8 total, 8 up, 8 in 2026-03-09T20:24:52.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:51 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/2696815369' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClass_vm05-94281-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm05-94281-41"}]: dispatch 2026-03-09T20:24:52.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:51 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3589855434' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm05-94310-49"}]: dispatch 2026-03-09T20:24:52.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:51 vm05 ceph-mon[51870]: from='client.50230 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm05-94310-49"}]: dispatch 2026-03-09T20:24:53.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:53 vm09 ceph-mon[54524]: pgmap v309: 260 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 255 active+clean; 4.4 MiB data, 715 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 253 B/s wr, 1 op/s 2026-03-09T20:24:53.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:53 vm09 ceph-mon[54524]: from='client.50230 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm05-94310-49"}]': finished 2026-03-09T20:24:53.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:53 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3589855434' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm05-94310-49"}]: dispatch 2026-03-09T20:24:53.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:53 vm09 ceph-mon[54524]: osdmap e242: 8 total, 8 up, 8 in 2026-03-09T20:24:53.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:53 vm09 ceph-mon[54524]: from='client.50230 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm05-94310-49"}]: dispatch 2026-03-09T20:24:53.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:53 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:53.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:53 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:53.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:53 vm05 ceph-mon[61345]: pgmap v309: 260 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 255 active+clean; 4.4 MiB data, 715 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 253 B/s wr, 1 op/s 2026-03-09T20:24:53.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:53 vm05 ceph-mon[61345]: from='client.50230 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm05-94310-49"}]': finished 2026-03-09T20:24:53.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:53 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3589855434' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm05-94310-49"}]: dispatch 2026-03-09T20:24:53.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:53 vm05 ceph-mon[61345]: osdmap e242: 8 total, 8 up, 8 in 2026-03-09T20:24:53.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:53 vm05 ceph-mon[61345]: from='client.50230 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm05-94310-49"}]: dispatch 2026-03-09T20:24:53.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:53 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:53.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:53 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:53.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:53 vm05 ceph-mon[51870]: pgmap v309: 260 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 255 active+clean; 4.4 MiB data, 715 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 253 B/s wr, 1 op/s 2026-03-09T20:24:53.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:53 vm05 ceph-mon[51870]: from='client.50230 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm05-94310-49"}]': finished 2026-03-09T20:24:53.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:53 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3589855434' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm05-94310-49"}]: dispatch 2026-03-09T20:24:53.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:53 vm05 ceph-mon[51870]: osdmap e242: 8 total, 8 up, 8 in 2026-03-09T20:24:53.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:53 vm05 ceph-mon[51870]: from='client.50230 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm05-94310-49"}]: dispatch 2026-03-09T20:24:53.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:53 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:53.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:53 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:24:54.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:54 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/2696815369' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClass_vm05-94281-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm05-94281-41"}]': finished 2026-03-09T20:24:54.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:54 vm05 ceph-mon[61345]: from='client.50230 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm05-94310-49"}]': finished 2026-03-09T20:24:54.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:54 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-35","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:24:54.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:54 vm05 ceph-mon[61345]: osdmap e243: 8 total, 8 up, 8 in 2026-03-09T20:24:54.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:54 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-35", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:24:54.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:54 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-35", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:24:54.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:54 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/711674665' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm05-94310-50"}]: dispatch 2026-03-09T20:24:54.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:54 vm05 ceph-mon[61345]: from='client.50239 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm05-94310-50"}]: dispatch 2026-03-09T20:24:54.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:54 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/711674665' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm05-94310-50"}]: dispatch 2026-03-09T20:24:54.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:54 vm05 ceph-mon[61345]: from='client.50239 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm05-94310-50"}]: dispatch 2026-03-09T20:24:54.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:54 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/711674665' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm05-94310-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:54.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:54 vm05 ceph-mon[61345]: from='client.50239 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm05-94310-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:54.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:54 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/2696815369' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClass_vm05-94281-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm05-94281-41"}]': finished 2026-03-09T20:24:54.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:54 vm05 ceph-mon[51870]: from='client.50230 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm05-94310-49"}]': finished 2026-03-09T20:24:54.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:54 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-35","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:24:54.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:54 vm05 ceph-mon[51870]: osdmap e243: 8 total, 8 up, 8 in 2026-03-09T20:24:54.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:54 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-35", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:24:54.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:54 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-35", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:24:54.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:54 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/711674665' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm05-94310-50"}]: dispatch 2026-03-09T20:24:54.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:54 vm05 ceph-mon[51870]: from='client.50239 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm05-94310-50"}]: dispatch 2026-03-09T20:24:54.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:54 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/711674665' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm05-94310-50"}]: dispatch 2026-03-09T20:24:54.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:54 vm05 ceph-mon[51870]: from='client.50239 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm05-94310-50"}]: dispatch 2026-03-09T20:24:54.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:54 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/711674665' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm05-94310-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:54.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:54 vm05 ceph-mon[51870]: from='client.50239 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm05-94310-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:54.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:54 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/2696815369' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClass_vm05-94281-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm05-94281-41"}]': finished 2026-03-09T20:24:54.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:54 vm09 ceph-mon[54524]: from='client.50230 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm05-94310-49"}]': finished 2026-03-09T20:24:54.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:54 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-35","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:24:54.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:54 vm09 ceph-mon[54524]: osdmap e243: 8 total, 8 up, 8 in 2026-03-09T20:24:54.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:54 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-35", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:24:54.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:54 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-35", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:24:54.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:54 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/711674665' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm05-94310-50"}]: dispatch 2026-03-09T20:24:54.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:54 vm09 ceph-mon[54524]: from='client.50239 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm05-94310-50"}]: dispatch 2026-03-09T20:24:54.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:54 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/711674665' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm05-94310-50"}]: dispatch 2026-03-09T20:24:54.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:54 vm09 ceph-mon[54524]: from='client.50239 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm05-94310-50"}]: dispatch 2026-03-09T20:24:54.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:54 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/711674665' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm05-94310-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:54.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:54 vm09 ceph-mon[54524]: from='client.50239 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm05-94310-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:55.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:55 vm05 ceph-mon[61345]: pgmap v312: 300 pgs: 40 unknown, 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 255 active+clean; 4.4 MiB data, 715 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 254 B/s wr, 1 op/s 2026-03-09T20:24:55.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:55 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-35", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:24:55.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:55 vm05 ceph-mon[61345]: from='client.50239 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm05-94310-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:24:55.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:55 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/711674665' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm05-94310-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm05-94310-50"}]: dispatch 2026-03-09T20:24:55.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:55 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-35"}]: dispatch 2026-03-09T20:24:55.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:55 vm05 ceph-mon[61345]: osdmap e244: 8 total, 8 up, 8 in 2026-03-09T20:24:55.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:55 vm05 ceph-mon[61345]: from='client.50239 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm05-94310-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm05-94310-50"}]: dispatch 2026-03-09T20:24:55.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:55 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-35"}]: dispatch 2026-03-09T20:24:55.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:55 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-35"}]': finished 2026-03-09T20:24:55.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:55 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-35", "mode": "writeback"}]: dispatch 2026-03-09T20:24:55.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:55 vm05 ceph-mon[61345]: osdmap e245: 8 total, 8 up, 8 in 2026-03-09T20:24:55.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:55 vm05 ceph-mon[51870]: pgmap v312: 300 pgs: 40 unknown, 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 255 active+clean; 4.4 MiB data, 715 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 254 B/s wr, 1 op/s 2026-03-09T20:24:55.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:55 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-35", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:24:55.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:55 vm05 ceph-mon[51870]: from='client.50239 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm05-94310-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:24:55.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:55 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/711674665' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm05-94310-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm05-94310-50"}]: dispatch 2026-03-09T20:24:55.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:55 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-35"}]: dispatch 2026-03-09T20:24:55.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:55 vm05 ceph-mon[51870]: osdmap e244: 8 total, 8 up, 8 in 2026-03-09T20:24:55.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:55 vm05 ceph-mon[51870]: from='client.50239 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm05-94310-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm05-94310-50"}]: dispatch 2026-03-09T20:24:55.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:55 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-35"}]: dispatch 2026-03-09T20:24:55.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:55 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-35"}]': finished 2026-03-09T20:24:55.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:55 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-35", "mode": "writeback"}]: dispatch 2026-03-09T20:24:55.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:55 vm05 ceph-mon[51870]: osdmap e245: 8 total, 8 up, 8 in 2026-03-09T20:24:55.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:55 vm09 ceph-mon[54524]: pgmap v312: 300 pgs: 40 unknown, 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 255 active+clean; 4.4 MiB data, 715 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 254 B/s wr, 1 op/s 2026-03-09T20:24:55.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:55 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-35", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:24:55.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:55 vm09 ceph-mon[54524]: from='client.50239 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm05-94310-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:24:55.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:55 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/711674665' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm05-94310-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm05-94310-50"}]: dispatch 2026-03-09T20:24:55.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:55 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-35"}]: dispatch 2026-03-09T20:24:55.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:55 vm09 ceph-mon[54524]: osdmap e244: 8 total, 8 up, 8 in 2026-03-09T20:24:55.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:55 vm09 ceph-mon[54524]: from='client.50239 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm05-94310-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm05-94310-50"}]: dispatch 2026-03-09T20:24:55.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:55 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-35"}]: dispatch 2026-03-09T20:24:55.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:55 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-35"}]': finished 2026-03-09T20:24:55.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:55 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-35", "mode": "writeback"}]: dispatch 2026-03-09T20:24:55.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:55 vm09 ceph-mon[54524]: osdmap e245: 8 total, 8 up, 8 in 2026-03-09T20:24:55.772 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:24:55 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:24:56.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:56 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-35", "mode": "writeback"}]: dispatch 2026-03-09T20:24:56.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:56 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2696815369' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm05-94281-41"}]: dispatch 2026-03-09T20:24:56.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:56 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:24:56.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:56 vm05 ceph-mon[61345]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:24:56.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:56 vm05 ceph-mon[61345]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:24:56.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:56 vm05 ceph-mon[61345]: from='client.50239 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm05-94310-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm05-94310-50"}]': finished 2026-03-09T20:24:56.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:56 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-35", "mode": "writeback"}]': finished 2026-03-09T20:24:56.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:56 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2696815369' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm05-94281-41"}]': finished 2026-03-09T20:24:56.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:56 vm05 ceph-mon[61345]: osdmap e246: 8 total, 8 up, 8 in 2026-03-09T20:24:56.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:56 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2696815369' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm05-94281-41"}]: dispatch 2026-03-09T20:24:56.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:56 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-35", "mode": "writeback"}]: dispatch 2026-03-09T20:24:56.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:56 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/2696815369' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm05-94281-41"}]: dispatch 2026-03-09T20:24:56.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:56 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:24:56.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:56 vm05 ceph-mon[51870]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:24:56.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:56 vm05 ceph-mon[51870]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:24:56.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:56 vm05 ceph-mon[51870]: from='client.50239 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm05-94310-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm05-94310-50"}]': finished 2026-03-09T20:24:56.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:56 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-35", "mode": "writeback"}]': finished 2026-03-09T20:24:56.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:56 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2696815369' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm05-94281-41"}]': finished 2026-03-09T20:24:56.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:56 vm05 ceph-mon[51870]: osdmap e246: 8 total, 8 up, 8 in 2026-03-09T20:24:56.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:56 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2696815369' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm05-94281-41"}]: dispatch 2026-03-09T20:24:56.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:56 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-35", "mode": "writeback"}]: dispatch 2026-03-09T20:24:56.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:56 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/2696815369' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm05-94281-41"}]: dispatch 2026-03-09T20:24:56.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:56 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:24:56.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:56 vm09 ceph-mon[54524]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:24:56.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:56 vm09 ceph-mon[54524]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:24:56.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:56 vm09 ceph-mon[54524]: from='client.50239 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm05-94310-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm05-94310-50"}]': finished 2026-03-09T20:24:56.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:56 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-35", "mode": "writeback"}]': finished 2026-03-09T20:24:56.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:56 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2696815369' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm05-94281-41"}]': finished 2026-03-09T20:24:56.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:56 vm09 ceph-mon[54524]: osdmap e246: 8 total, 8 up, 8 in 2026-03-09T20:24:56.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:56 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2696815369' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm05-94281-41"}]: dispatch 2026-03-09T20:24:57.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:57 vm05 ceph-mon[61345]: pgmap v315: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 4.4 MiB data, 716 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:24:57.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:57 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2696815369' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm05-94281-41"}]': finished 2026-03-09T20:24:57.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:57 vm05 ceph-mon[61345]: osdmap e247: 8 total, 8 up, 8 in 2026-03-09T20:24:57.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:57 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2513120840' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm05-94281-42"}]: dispatch 2026-03-09T20:24:57.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:57 vm05 ceph-mon[61345]: from='client.49523 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm05-94281-42"}]: dispatch 2026-03-09T20:24:57.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:57 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/2513120840' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm05-94281-42"}]: dispatch 2026-03-09T20:24:57.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:57 vm05 ceph-mon[51870]: pgmap v315: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 4.4 MiB data, 716 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:24:57.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:57 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2696815369' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm05-94281-41"}]': finished 2026-03-09T20:24:57.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:57 vm05 ceph-mon[51870]: osdmap e247: 8 total, 8 up, 8 in 2026-03-09T20:24:57.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:57 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2513120840' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm05-94281-42"}]: dispatch 2026-03-09T20:24:57.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:57 vm05 ceph-mon[51870]: from='client.49523 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm05-94281-42"}]: dispatch 2026-03-09T20:24:57.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:57 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2513120840' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm05-94281-42"}]: dispatch 2026-03-09T20:24:57.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:57 vm09 ceph-mon[54524]: pgmap v315: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 4.4 MiB data, 716 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:24:57.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:57 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2696815369' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm05-94281-41"}]': finished 2026-03-09T20:24:57.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:57 vm09 ceph-mon[54524]: osdmap e247: 8 total, 8 up, 8 in 2026-03-09T20:24:57.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:57 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2513120840' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm05-94281-42"}]: dispatch 2026-03-09T20:24:57.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:57 vm09 ceph-mon[54524]: from='client.49523 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm05-94281-42"}]: dispatch 2026-03-09T20:24:57.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:57 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2513120840' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm05-94281-42"}]: dispatch 2026-03-09T20:24:58.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:58 vm05 ceph-mon[61345]: from='client.49523 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm05-94281-42"}]: dispatch 2026-03-09T20:24:58.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:58 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/2513120840' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm05-94281-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:58.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:58 vm05 ceph-mon[61345]: from='client.49523 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm05-94281-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:58.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:58 vm05 ceph-mon[61345]: from='client.49523 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm05-94281-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:24:58.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:58 vm05 ceph-mon[61345]: osdmap e248: 8 total, 8 up, 8 in 2026-03-09T20:24:58.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:58 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/711674665' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm05-94310-50"}]: dispatch 2026-03-09T20:24:58.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:58 vm05 ceph-mon[61345]: from='client.50239 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm05-94310-50"}]: dispatch 2026-03-09T20:24:58.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:58 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2513120840' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWrite_vm05-94281-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm05-94281-42"}]: dispatch 2026-03-09T20:24:58.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:58 vm05 ceph-mon[61345]: from='client.49523 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWrite_vm05-94281-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm05-94281-42"}]: dispatch 2026-03-09T20:24:58.634 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:58 vm05 ceph-mon[51870]: from='client.49523 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm05-94281-42"}]: dispatch 2026-03-09T20:24:58.634 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:58 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/2513120840' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm05-94281-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:58.634 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:58 vm05 ceph-mon[51870]: from='client.49523 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm05-94281-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:58.634 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:58 vm05 ceph-mon[51870]: from='client.49523 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm05-94281-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:24:58.634 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:58 vm05 ceph-mon[51870]: osdmap e248: 8 total, 8 up, 8 in 2026-03-09T20:24:58.634 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:58 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/711674665' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm05-94310-50"}]: dispatch 2026-03-09T20:24:58.634 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:58 vm05 ceph-mon[51870]: from='client.50239 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm05-94310-50"}]: dispatch 2026-03-09T20:24:58.634 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:58 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2513120840' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWrite_vm05-94281-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm05-94281-42"}]: dispatch 2026-03-09T20:24:58.634 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:58 vm05 ceph-mon[51870]: from='client.49523 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWrite_vm05-94281-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm05-94281-42"}]: dispatch 2026-03-09T20:24:58.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:58 vm09 ceph-mon[54524]: from='client.49523 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm05-94281-42"}]: dispatch 2026-03-09T20:24:58.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:58 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/2513120840' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm05-94281-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:58.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:58 vm09 ceph-mon[54524]: from='client.49523 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm05-94281-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:24:58.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:58 vm09 ceph-mon[54524]: from='client.49523 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm05-94281-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:24:58.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:58 vm09 ceph-mon[54524]: osdmap e248: 8 total, 8 up, 8 in 2026-03-09T20:24:58.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:58 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/711674665' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm05-94310-50"}]: dispatch 2026-03-09T20:24:58.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:58 vm09 ceph-mon[54524]: from='client.50239 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm05-94310-50"}]: dispatch 2026-03-09T20:24:58.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:58 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2513120840' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWrite_vm05-94281-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm05-94281-42"}]: dispatch 2026-03-09T20:24:58.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:58 vm09 ceph-mon[54524]: from='client.49523 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWrite_vm05-94281-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm05-94281-42"}]: dispatch 2026-03-09T20:24:58.909 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:24:58 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:24:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:24:59.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:59 vm05 ceph-mon[61345]: pgmap v318: 300 pgs: 8 unknown, 1 active+clean+snaptrim, 291 active+clean; 4.4 MiB data, 716 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:24:59.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:59 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:24:59.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:59 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:24:59.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:59 vm05 ceph-mon[61345]: from='client.50239 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm05-94310-50"}]': finished 2026-03-09T20:24:59.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:59 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:24:59.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:59 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/711674665' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm05-94310-50"}]: dispatch 2026-03-09T20:24:59.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:59 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-35"}]: dispatch 2026-03-09T20:24:59.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:59 vm05 ceph-mon[61345]: osdmap e249: 8 total, 8 up, 8 in 2026-03-09T20:24:59.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:59 vm05 ceph-mon[61345]: from='client.50239 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm05-94310-50"}]: dispatch 2026-03-09T20:24:59.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:24:59 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-35"}]: dispatch 2026-03-09T20:24:59.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:59 vm05 ceph-mon[51870]: pgmap v318: 300 pgs: 8 unknown, 1 active+clean+snaptrim, 291 active+clean; 4.4 MiB data, 716 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:24:59.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:59 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:24:59.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:59 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:24:59.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:59 vm05 ceph-mon[51870]: from='client.50239 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm05-94310-50"}]': finished 2026-03-09T20:24:59.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:59 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:24:59.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:59 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/711674665' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm05-94310-50"}]: dispatch 2026-03-09T20:24:59.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:59 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-35"}]: dispatch 2026-03-09T20:24:59.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:59 vm05 ceph-mon[51870]: osdmap e249: 8 total, 8 up, 8 in 2026-03-09T20:24:59.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:59 vm05 ceph-mon[51870]: from='client.50239 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm05-94310-50"}]: dispatch 2026-03-09T20:24:59.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:24:59 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-35"}]: dispatch 2026-03-09T20:24:59.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:59 vm09 ceph-mon[54524]: pgmap v318: 300 pgs: 8 unknown, 1 active+clean+snaptrim, 291 active+clean; 4.4 MiB data, 716 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:24:59.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:59 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:24:59.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:59 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:24:59.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:59 vm09 ceph-mon[54524]: from='client.50239 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm05-94310-50"}]': finished 2026-03-09T20:24:59.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:59 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:24:59.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:59 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/711674665' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm05-94310-50"}]: dispatch 2026-03-09T20:24:59.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:59 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-35"}]: dispatch 2026-03-09T20:24:59.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:59 vm09 ceph-mon[54524]: osdmap e249: 8 total, 8 up, 8 in 2026-03-09T20:24:59.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:59 vm09 ceph-mon[54524]: from='client.50239 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm05-94310-50"}]: dispatch 2026-03-09T20:24:59.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:24:59 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-35"}]: dispatch 2026-03-09T20:25:00.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:00 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:25:00.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:00 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:25:00.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:00 vm05 ceph-mon[61345]: from='client.49523 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWrite_vm05-94281-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm05-94281-42"}]': finished 2026-03-09T20:25:00.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:00 vm05 ceph-mon[61345]: from='client.50239 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm05-94310-50"}]': finished 2026-03-09T20:25:00.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:00 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-35"}]': finished 2026-03-09T20:25:00.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:00 vm05 ceph-mon[61345]: osdmap e250: 8 total, 8 up, 8 in 2026-03-09T20:25:00.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:00 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:25:00.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:00 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:25:00.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:00 vm05 ceph-mon[51870]: from='client.49523 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWrite_vm05-94281-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm05-94281-42"}]': finished 2026-03-09T20:25:00.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:00 vm05 ceph-mon[51870]: from='client.50239 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm05-94310-50"}]': finished 2026-03-09T20:25:00.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:00 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-35"}]': finished 2026-03-09T20:25:00.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:00 vm05 ceph-mon[51870]: osdmap e250: 8 total, 8 up, 8 in 
2026-03-09T20:25:01.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:00 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:25:01.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:00 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:25:01.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:00 vm09 ceph-mon[54524]: from='client.49523 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWrite_vm05-94281-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm05-94281-42"}]': finished 2026-03-09T20:25:01.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:00 vm09 ceph-mon[54524]: from='client.50239 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm05-94310-50"}]': finished 2026-03-09T20:25:01.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:00 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-35"}]': finished 2026-03-09T20:25:01.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:00 vm09 ceph-mon[54524]: osdmap e250: 8 total, 8 up, 8 in 2026-03-09T20:25:01.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:01 vm05 ceph-mon[61345]: pgmap v321: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 4.4 MiB data, 716 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:25:01.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:01 vm05 ceph-mon[61345]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:25:01.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:01 vm05 ceph-mon[61345]: osdmap e251: 8 total, 8 up, 8 in 2026-03-09T20:25:01.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:01 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2449844850' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm05-94310-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:25:01.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:01 vm05 ceph-mon[61345]: from='client.50251 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm05-94310-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:25:01.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:01 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:25:01.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:01 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:25:01.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:01 vm05 ceph-mon[51870]: pgmap v321: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 4.4 MiB data, 716 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:25:01.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:01 vm05 ceph-mon[51870]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:25:01.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:01 vm05 ceph-mon[51870]: osdmap e251: 8 total, 8 up, 8 in 2026-03-09T20:25:01.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:01 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2449844850' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm05-94310-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:25:01.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:01 vm05 ceph-mon[51870]: from='client.50251 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm05-94310-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:25:01.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:01 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:25:01.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:01 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:25:02.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:01 vm09 ceph-mon[54524]: pgmap v321: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 4.4 MiB data, 716 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:25:02.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:01 vm09 ceph-mon[54524]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:25:02.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:01 vm09 ceph-mon[54524]: osdmap e251: 8 total, 8 up, 8 in 2026-03-09T20:25:02.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:01 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2449844850' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm05-94310-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:25:02.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:01 vm09 ceph-mon[54524]: from='client.50251 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm05-94310-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:25:02.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:01 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:25:02.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:01 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:25:03.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:03 vm05 ceph-mon[61345]: from='client.50251 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm05-94310-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:25:03.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:03 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:25:03.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:03 vm05 ceph-mon[61345]: osdmap e252: 8 total, 8 up, 8 in 2026-03-09T20:25:03.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:03 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-35"}]: dispatch 2026-03-09T20:25:03.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:03 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2513120840' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm05-94281-42"}]: dispatch 2026-03-09T20:25:03.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:03 vm05 ceph-mon[61345]: from='client.49523 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm05-94281-42"}]: dispatch 2026-03-09T20:25:03.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:03 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-35"}]: dispatch 2026-03-09T20:25:03.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:03 vm05 ceph-mon[61345]: pgmap v325: 324 pgs: 32 unknown, 292 active+clean; 4.4 MiB data, 716 MiB used, 159 GiB / 160 GiB avail; 2.8 KiB/s rd, 1.5 KiB/s wr, 4 op/s 2026-03-09T20:25:03.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:03 vm05 ceph-mon[51870]: from='client.50251 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm05-94310-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:25:03.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:03 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:25:03.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:03 vm05 ceph-mon[51870]: osdmap e252: 8 total, 8 up, 8 in 2026-03-09T20:25:03.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:03 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-35"}]: dispatch 2026-03-09T20:25:03.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:03 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/2513120840' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm05-94281-42"}]: dispatch 2026-03-09T20:25:03.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:03 vm05 ceph-mon[51870]: from='client.49523 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm05-94281-42"}]: dispatch 2026-03-09T20:25:03.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:03 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-35"}]: dispatch 2026-03-09T20:25:03.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:03 vm05 ceph-mon[51870]: pgmap v325: 324 pgs: 32 unknown, 292 active+clean; 4.4 MiB data, 716 MiB used, 159 GiB / 160 GiB avail; 2.8 KiB/s rd, 1.5 KiB/s wr, 4 op/s 2026-03-09T20:25:03.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:03 vm09 ceph-mon[54524]: from='client.50251 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm05-94310-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:25:03.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:03 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:25:03.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:03 vm09 ceph-mon[54524]: osdmap e252: 8 total, 8 up, 8 in 2026-03-09T20:25:03.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:03 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-35"}]: dispatch 2026-03-09T20:25:03.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:03 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/2513120840' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm05-94281-42"}]: dispatch 2026-03-09T20:25:03.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:03 vm09 ceph-mon[54524]: from='client.49523 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm05-94281-42"}]: dispatch 2026-03-09T20:25:03.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:03 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-35"}]: dispatch 2026-03-09T20:25:03.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:03 vm09 ceph-mon[54524]: pgmap v325: 324 pgs: 32 unknown, 292 active+clean; 4.4 MiB data, 716 MiB used, 159 GiB / 160 GiB avail; 2.8 KiB/s rd, 1.5 KiB/s wr, 4 op/s 2026-03-09T20:25:04.075 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio api_aio: [ OK ] LibRadosAioEC.SimpleWrite (7114 ms) 2026-03-09T20:25:04.075 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAioEC.WaitForComplete 2026-03-09T20:25:04.075 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAioEC.WaitForComplete (7182 ms) 2026-03-09T20:25:04.075 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAioEC.RoundTrip 2026-03-09T20:25:04.075 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAioEC.RoundTrip (7019 ms) 2026-03-09T20:25:04.075 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAioEC.RoundTrip2 2026-03-09T20:25:04.075 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAioEC.RoundTrip2 (7139 ms) 2026-03-09T20:25:04.075 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAioEC.RoundTripAppend 2026-03-09T20:25:04.075 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAioEC.RoundTripAppend (7127 ms) 2026-03-09T20:25:04.075 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAioEC.IsComplete 2026-03-09T20:25:04.075 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAioEC.IsComplete (7121 ms) 2026-03-09T20:25:04.075 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAioEC.IsSafe 2026-03-09T20:25:04.075 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAioEC.IsSafe (7143 ms) 2026-03-09T20:25:04.075 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAioEC.ReturnValue 2026-03-09T20:25:04.076 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAioEC.ReturnValue (6945 ms) 2026-03-09T20:25:04.076 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAioEC.Flush 2026-03-09T20:25:04.076 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAioEC.Flush (7030 ms) 2026-03-09T20:25:04.076 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAioEC.FlushAsync 2026-03-09T20:25:04.076 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAioEC.FlushAsync (7221 ms) 2026-03-09T20:25:04.076 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAioEC.RoundTripWriteFull 2026-03-09T20:25:04.076 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAioEC.RoundTripWriteFull (7062 ms) 2026-03-09T20:25:04.076 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAioEC.SimpleStat 2026-03-09T20:25:04.076 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] 
LibRadosAioEC.SimpleStat (6693 ms) 2026-03-09T20:25:04.076 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAioEC.SimpleStatNS 2026-03-09T20:25:04.076 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAioEC.SimpleStatNS (7162 ms) 2026-03-09T20:25:04.076 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAioEC.StatRemove 2026-03-09T20:25:04.076 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAioEC.StatRemove (7044 ms) 2026-03-09T20:25:04.076 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAioEC.ExecuteClass 2026-03-09T20:25:04.076 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAioEC.ExecuteClass (7046 ms) 2026-03-09T20:25:04.076 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAioEC.MultiWrite 2026-03-09T20:25:04.076 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAioEC.MultiWrite (6794 ms) 2026-03-09T20:25:04.076 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [----------] 16 tests from LibRadosAioEC (112843 ms total) 2026-03-09T20:25:04.076 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: 2026-03-09T20:25:04.076 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [----------] Global test environment tear-down 2026-03-09T20:25:04.076 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [==========] 42 tests from 2 test suites ran. (194465 ms total) 2026-03-09T20:25:04.076 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ PASSED ] 42 tests. 2026-03-09T20:25:04.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:04 vm05 ceph-mon[61345]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:25:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:04 vm05 ceph-mon[61345]: from='client.49523 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm05-94281-42"}]': finished 2026-03-09T20:25:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:04 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-35"}]': finished 2026-03-09T20:25:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:04 vm05 ceph-mon[61345]: osdmap e253: 8 total, 8 up, 8 in 2026-03-09T20:25:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:04 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2513120840' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm05-94281-42"}]: dispatch 2026-03-09T20:25:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:04 vm05 ceph-mon[61345]: from='client.49523 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm05-94281-42"}]: dispatch 2026-03-09T20:25:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:04 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1177130671' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm05-94310-52"}]: dispatch 2026-03-09T20:25:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:04 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1177130671' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm05-94310-52"}]: dispatch 2026-03-09T20:25:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:04 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1177130671' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm05-94310-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:25:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:04 vm05 ceph-mon[51870]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:25:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:04 vm05 ceph-mon[51870]: from='client.49523 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm05-94281-42"}]': finished 2026-03-09T20:25:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:04 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-35"}]': finished 2026-03-09T20:25:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:04 vm05 ceph-mon[51870]: osdmap e253: 8 total, 8 up, 8 in 2026-03-09T20:25:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:04 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2513120840' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm05-94281-42"}]: dispatch 2026-03-09T20:25:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:04 vm05 ceph-mon[51870]: from='client.49523 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm05-94281-42"}]: dispatch 2026-03-09T20:25:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:04 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1177130671' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm05-94310-52"}]: dispatch 2026-03-09T20:25:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:04 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1177130671' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm05-94310-52"}]: dispatch 2026-03-09T20:25:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:04 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1177130671' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm05-94310-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:25:04.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:04 vm09 ceph-mon[54524]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:25:04.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:04 vm09 ceph-mon[54524]: from='client.49523 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm05-94281-42"}]': finished 2026-03-09T20:25:04.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:04 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-35"}]': finished 2026-03-09T20:25:04.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:04 vm09 ceph-mon[54524]: osdmap e253: 8 total, 8 up, 8 in 2026-03-09T20:25:04.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:04 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/2513120840' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm05-94281-42"}]: dispatch 2026-03-09T20:25:04.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:04 vm09 ceph-mon[54524]: from='client.49523 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm05-94281-42"}]: dispatch 2026-03-09T20:25:04.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:04 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1177130671' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm05-94310-52"}]: dispatch 2026-03-09T20:25:04.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:04 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1177130671' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm05-94310-52"}]: dispatch 2026-03-09T20:25:04.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:04 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1177130671' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm05-94310-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:25:05.398 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:05 vm09 ceph-mon[54524]: from='client.49523 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm05-94281-42"}]': finished 2026-03-09T20:25:05.398 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:05 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1177130671' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm05-94310-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:25:05.398 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:05 vm09 ceph-mon[54524]: osdmap e254: 8 total, 8 up, 8 in 2026-03-09T20:25:05.398 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:05 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1177130671' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm05-94310-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm05-94310-52"}]: dispatch 2026-03-09T20:25:05.398 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:05 vm09 ceph-mon[54524]: pgmap v328: 260 pgs: 260 active+clean; 4.4 MiB data, 716 MiB used, 159 GiB / 160 GiB avail; 775 B/s rd, 775 B/s wr, 1 op/s 2026-03-09T20:25:05.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:05 vm05 ceph-mon[61345]: from='client.49523 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm05-94281-42"}]': finished 2026-03-09T20:25:05.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:05 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1177130671' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm05-94310-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:25:05.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:05 vm05 ceph-mon[61345]: osdmap e254: 8 total, 8 up, 8 in 2026-03-09T20:25:05.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:05 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1177130671' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm05-94310-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm05-94310-52"}]: dispatch 2026-03-09T20:25:05.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:05 vm05 ceph-mon[61345]: pgmap v328: 260 pgs: 260 active+clean; 4.4 MiB data, 716 MiB used, 159 GiB / 160 GiB avail; 775 B/s rd, 775 B/s wr, 1 op/s 2026-03-09T20:25:05.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:05 vm05 ceph-mon[51870]: from='client.49523 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm05-94281-42"}]': finished 2026-03-09T20:25:05.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:05 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1177130671' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm05-94310-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:25:05.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:05 vm05 ceph-mon[51870]: osdmap e254: 8 total, 8 up, 8 in 2026-03-09T20:25:05.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:05 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1177130671' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm05-94310-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm05-94310-52"}]: dispatch 2026-03-09T20:25:05.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:05 vm05 ceph-mon[51870]: pgmap v328: 260 pgs: 260 active+clean; 4.4 MiB data, 716 MiB used, 159 GiB / 160 GiB avail; 775 B/s rd, 775 B/s wr, 1 op/s 2026-03-09T20:25:05.772 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:25:05 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:25:06.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:06 vm05 ceph-mon[61345]: osdmap e255: 8 total, 8 up, 8 in 2026-03-09T20:25:06.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:06 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:25:06.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:06 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:25:06.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:06 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:25:06.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:06 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1177130671' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm05-94310-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm05-94310-52"}]': finished 2026-03-09T20:25:06.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:06 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:25:06.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:06 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-37", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:25:06.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:06 vm05 ceph-mon[61345]: osdmap e256: 8 total, 8 up, 8 in 2026-03-09T20:25:06.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:06 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-37", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:25:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:06 vm05 ceph-mon[51870]: osdmap e255: 8 total, 8 up, 8 in 2026-03-09T20:25:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:06 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:25:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:06 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:25:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:06 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:25:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:06 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1177130671' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm05-94310-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm05-94310-52"}]': finished 2026-03-09T20:25:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:06 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:25:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:06 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-37", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:25:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:06 vm05 ceph-mon[51870]: osdmap e256: 8 total, 8 up, 8 in 2026-03-09T20:25:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:06 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-37", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:25:06.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:06 vm09 ceph-mon[54524]: osdmap e255: 8 total, 8 up, 8 in 2026-03-09T20:25:06.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:06 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:25:06.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:06 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:25:06.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:06 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:25:06.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:06 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1177130671' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm05-94310-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm05-94310-52"}]': finished 2026-03-09T20:25:06.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:06 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:25:06.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:06 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-37", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:25:06.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:06 vm09 ceph-mon[54524]: osdmap e256: 8 total, 8 up, 8 in 2026-03-09T20:25:06.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:06 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-37", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:25:07.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:07 vm09 ceph-mon[54524]: pgmap v331: 300 pgs: 8 unknown, 32 creating+peering, 260 active+clean; 4.4 MiB data, 725 MiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 0 op/s 2026-03-09T20:25:07.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:07 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-37", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:25:07.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:07 vm09 ceph-mon[54524]: osdmap e257: 8 total, 8 up, 8 in 2026-03-09T20:25:07.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:07 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-37"}]: dispatch 2026-03-09T20:25:07.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:07 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-37"}]: dispatch 2026-03-09T20:25:07.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:07 vm05 ceph-mon[61345]: pgmap v331: 300 pgs: 8 unknown, 32 creating+peering, 260 active+clean; 4.4 MiB data, 725 MiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 0 op/s 2026-03-09T20:25:07.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:07 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-37", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:25:07.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:07 vm05 ceph-mon[61345]: osdmap e257: 8 total, 8 up, 8 in 2026-03-09T20:25:07.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:07 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-37"}]: dispatch 2026-03-09T20:25:07.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:07 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-37"}]: dispatch 2026-03-09T20:25:07.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:07 vm05 ceph-mon[51870]: pgmap v331: 300 pgs: 8 unknown, 32 creating+peering, 260 active+clean; 4.4 MiB data, 725 MiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 0 op/s 2026-03-09T20:25:07.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:07 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-37", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:25:07.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:07 vm05 ceph-mon[51870]: osdmap e257: 8 total, 8 up, 8 in 2026-03-09T20:25:07.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:07 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-37"}]: dispatch 2026-03-09T20:25:07.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:07 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-37"}]: dispatch 2026-03-09T20:25:08.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:08 vm09 ceph-mon[54524]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:25:08.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:08 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-37"}]': finished 2026-03-09T20:25:08.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:08 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-37", "mode": "writeback"}]: dispatch 2026-03-09T20:25:08.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:08 vm09 ceph-mon[54524]: osdmap e258: 8 total, 8 up, 8 in 2026-03-09T20:25:08.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:08 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-37", "mode": "writeback"}]: dispatch 2026-03-09T20:25:08.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:08 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/1177130671' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm05-94310-52"}]: dispatch 2026-03-09T20:25:08.560 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:08 vm05 ceph-mon[61345]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:25:08.560 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:08 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-37"}]': finished 2026-03-09T20:25:08.561 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:08 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-37", "mode": "writeback"}]: dispatch 2026-03-09T20:25:08.561 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:08 vm05 ceph-mon[61345]: osdmap e258: 8 total, 8 up, 8 in 2026-03-09T20:25:08.561 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:08 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-37", "mode": "writeback"}]: dispatch 2026-03-09T20:25:08.561 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:08 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1177130671' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm05-94310-52"}]: dispatch 2026-03-09T20:25:08.561 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:08 vm05 ceph-mon[51870]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:25:08.561 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:08 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-37"}]': finished 2026-03-09T20:25:08.561 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:08 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-37", "mode": "writeback"}]: dispatch 2026-03-09T20:25:08.561 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:08 vm05 ceph-mon[51870]: osdmap e258: 8 total, 8 up, 8 in 2026-03-09T20:25:08.561 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:08 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-37", "mode": "writeback"}]: dispatch 2026-03-09T20:25:08.561 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:08 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/1177130671' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm05-94310-52"}]: dispatch 2026-03-09T20:25:08.909 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:25:08 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:25:08] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:25:09.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:09 vm09 ceph-mon[54524]: pgmap v334: 292 pgs: 32 creating+peering, 260 active+clean; 4.4 MiB data, 725 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:25:09.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:09 vm09 ceph-mon[54524]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:25:09.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:09 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-37", "mode": "writeback"}]': finished 2026-03-09T20:25:09.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:09 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1177130671' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm05-94310-52"}]': finished 2026-03-09T20:25:09.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:09 vm09 ceph-mon[54524]: osdmap e259: 8 total, 8 up, 8 in 2026-03-09T20:25:09.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:09 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1177130671' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm05-94310-52"}]: dispatch 2026-03-09T20:25:09.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:09 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:25:09.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:09 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:25:09.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:09 vm05 ceph-mon[61345]: pgmap v334: 292 pgs: 32 creating+peering, 260 active+clean; 4.4 MiB data, 725 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:25:09.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:09 vm05 ceph-mon[61345]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:25:09.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:09 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-37", "mode": "writeback"}]': finished 2026-03-09T20:25:09.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:09 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1177130671' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm05-94310-52"}]': finished 2026-03-09T20:25:09.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:09 vm05 ceph-mon[61345]: osdmap e259: 8 total, 8 up, 8 in 2026-03-09T20:25:09.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:09 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1177130671' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm05-94310-52"}]: dispatch 2026-03-09T20:25:09.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:09 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:25:09.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:09 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:25:09.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:09 vm05 ceph-mon[51870]: pgmap v334: 292 pgs: 32 creating+peering, 260 active+clean; 4.4 MiB data, 725 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:25:09.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:09 vm05 ceph-mon[51870]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:25:09.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:09 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-37", "mode": "writeback"}]': finished 2026-03-09T20:25:09.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:09 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1177130671' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm05-94310-52"}]': finished 2026-03-09T20:25:09.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:09 vm05 ceph-mon[51870]: osdmap e259: 8 total, 8 up, 8 in 2026-03-09T20:25:09.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:09 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1177130671' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm05-94310-52"}]: dispatch 2026-03-09T20:25:09.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:09 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:25:09.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:09 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:25:11.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:11 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1177130671' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm05-94310-52"}]': finished 2026-03-09T20:25:11.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:11 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:25:11.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:11 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-37"}]: dispatch 2026-03-09T20:25:11.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:11 vm05 ceph-mon[61345]: osdmap e260: 8 total, 8 up, 8 in 2026-03-09T20:25:11.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:11 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-37"}]: dispatch 2026-03-09T20:25:11.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:11 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1345939571' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm05-94310-53"}]: dispatch 2026-03-09T20:25:11.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:11 vm05 ceph-mon[61345]: from='client.50263 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm05-94310-53"}]: dispatch 2026-03-09T20:25:11.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:11 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1345939571' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm05-94310-53"}]: dispatch 2026-03-09T20:25:11.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:11 vm05 ceph-mon[61345]: from='client.50263 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm05-94310-53"}]: dispatch 2026-03-09T20:25:11.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:11 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1345939571' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm05-94310-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:25:11.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:11 vm05 ceph-mon[61345]: from='client.50263 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm05-94310-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:25:11.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:11 vm05 ceph-mon[61345]: pgmap v337: 292 pgs: 32 creating+peering, 260 active+clean; 4.4 MiB data, 725 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:25:11.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:11 vm05 ceph-mon[61345]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:25:11.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:11 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-37"}]': finished 2026-03-09T20:25:11.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:11 vm05 ceph-mon[61345]: from='client.50263 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm05-94310-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:25:11.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:11 vm05 ceph-mon[61345]: osdmap e261: 8 total, 8 up, 8 in 2026-03-09T20:25:11.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:11 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1345939571' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm05-94310-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm05-94310-53"}]: dispatch 2026-03-09T20:25:11.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:11 vm05 ceph-mon[61345]: from='client.50263 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm05-94310-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm05-94310-53"}]: dispatch 2026-03-09T20:25:11.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:11 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1177130671' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm05-94310-52"}]': finished 2026-03-09T20:25:11.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:11 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:25:11.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:11 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-37"}]: dispatch 2026-03-09T20:25:11.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:11 vm05 ceph-mon[51870]: osdmap e260: 8 total, 8 up, 8 in 2026-03-09T20:25:11.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:11 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-37"}]: dispatch 2026-03-09T20:25:11.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:11 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1345939571' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm05-94310-53"}]: dispatch 2026-03-09T20:25:11.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:11 vm05 ceph-mon[51870]: from='client.50263 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm05-94310-53"}]: dispatch 2026-03-09T20:25:11.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:11 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1345939571' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm05-94310-53"}]: dispatch 2026-03-09T20:25:11.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:11 vm05 ceph-mon[51870]: from='client.50263 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm05-94310-53"}]: dispatch 2026-03-09T20:25:11.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:11 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/1345939571' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm05-94310-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:25:11.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:11 vm05 ceph-mon[51870]: from='client.50263 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm05-94310-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:25:11.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:11 vm05 ceph-mon[51870]: pgmap v337: 292 pgs: 32 creating+peering, 260 active+clean; 4.4 MiB data, 725 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:25:11.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:11 vm05 ceph-mon[51870]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:25:11.411 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:11 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-37"}]': finished 2026-03-09T20:25:11.411 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:11 vm05 ceph-mon[51870]: from='client.50263 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm05-94310-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:25:11.411 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:11 vm05 ceph-mon[51870]: osdmap e261: 8 total, 8 up, 8 in 2026-03-09T20:25:11.411 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:11 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1345939571' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm05-94310-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm05-94310-53"}]: dispatch 2026-03-09T20:25:11.411 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:11 vm05 ceph-mon[51870]: from='client.50263 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm05-94310-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm05-94310-53"}]: dispatch 2026-03-09T20:25:11.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:11 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1177130671' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm05-94310-52"}]': finished 2026-03-09T20:25:11.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:11 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:25:11.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:11 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-37"}]: dispatch 2026-03-09T20:25:11.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:11 vm09 ceph-mon[54524]: osdmap e260: 8 total, 8 up, 8 in 2026-03-09T20:25:11.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:11 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-37"}]: dispatch 2026-03-09T20:25:11.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:11 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1345939571' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm05-94310-53"}]: dispatch 2026-03-09T20:25:11.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:11 vm09 ceph-mon[54524]: from='client.50263 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm05-94310-53"}]: dispatch 2026-03-09T20:25:11.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:11 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1345939571' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm05-94310-53"}]: dispatch 2026-03-09T20:25:11.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:11 vm09 ceph-mon[54524]: from='client.50263 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm05-94310-53"}]: dispatch 2026-03-09T20:25:11.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:11 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1345939571' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm05-94310-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:25:11.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:11 vm09 ceph-mon[54524]: from='client.50263 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm05-94310-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:25:11.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:11 vm09 ceph-mon[54524]: pgmap v337: 292 pgs: 32 creating+peering, 260 active+clean; 4.4 MiB data, 725 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:25:11.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:11 vm09 ceph-mon[54524]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:25:11.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:11 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-37"}]': finished 2026-03-09T20:25:11.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:11 vm09 ceph-mon[54524]: from='client.50263 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm05-94310-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:25:11.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:11 vm09 ceph-mon[54524]: osdmap e261: 8 total, 8 up, 8 in 2026-03-09T20:25:11.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:11 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/1345939571' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm05-94310-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm05-94310-53"}]: dispatch 2026-03-09T20:25:11.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:11 vm09 ceph-mon[54524]: from='client.50263 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm05-94310-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm05-94310-53"}]: dispatch 2026-03-09T20:25:13.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:13 vm05 ceph-mon[61345]: osdmap e262: 8 total, 8 up, 8 in 2026-03-09T20:25:13.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:13 vm05 ceph-mon[61345]: pgmap v340: 260 pgs: 260 active+clean; 4.4 MiB data, 726 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T20:25:13.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:13 vm05 ceph-mon[51870]: osdmap e262: 8 total, 8 up, 8 in 2026-03-09T20:25:13.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:13 vm05 ceph-mon[51870]: pgmap v340: 260 pgs: 260 active+clean; 4.4 MiB data, 726 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T20:25:13.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:13 vm09 ceph-mon[54524]: osdmap e262: 8 total, 8 up, 8 in 2026-03-09T20:25:13.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:13 vm09 ceph-mon[54524]: pgmap v340: 260 pgs: 260 active+clean; 4.4 MiB data, 726 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T20:25:14.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:14 vm09 ceph-mon[54524]: from='client.50263 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm05-94310-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm05-94310-53"}]': finished 2026-03-09T20:25:14.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:14 vm09 ceph-mon[54524]: osdmap e263: 8 total, 8 up, 8 in 2026-03-09T20:25:14.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:14 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:25:14.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:14 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:25:14.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:14 vm05 ceph-mon[61345]: from='client.50263 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm05-94310-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm05-94310-53"}]': finished 2026-03-09T20:25:14.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:14 vm05 ceph-mon[61345]: osdmap e263: 8 total, 8 up, 8 in 2026-03-09T20:25:14.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:14 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:25:14.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:14 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:25:14.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:14 vm05 ceph-mon[51870]: from='client.50263 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm05-94310-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm05-94310-53"}]': finished 2026-03-09T20:25:14.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:14 vm05 ceph-mon[51870]: osdmap e263: 8 total, 8 up, 8 in 2026-03-09T20:25:14.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:14 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:25:14.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:14 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:25:15.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:15 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:25:15.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:15 vm09 ceph-mon[54524]: osdmap e264: 8 total, 8 up, 8 in 2026-03-09T20:25:15.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:15 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-39", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:25:15.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:15 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-39", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:25:15.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:15 vm09 ceph-mon[54524]: pgmap v343: 300 pgs: 40 unknown, 260 active+clean; 4.4 MiB data, 726 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T20:25:15.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:15 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:25:15.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:15 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:25:15.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:15 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-39", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:25:15.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:15 vm09 ceph-mon[54524]: osdmap e265: 8 total, 8 up, 8 in 2026-03-09T20:25:15.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:15 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-39"}]: dispatch 2026-03-09T20:25:15.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:15 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-39"}]: dispatch 2026-03-09T20:25:15.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:15 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1345939571' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm05-94310-53"}]: dispatch 2026-03-09T20:25:15.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:15 vm09 ceph-mon[54524]: from='client.50263 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm05-94310-53"}]: dispatch 2026-03-09T20:25:15.523 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:25:15 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:25:15.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:15 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:25:15.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:15 vm05 ceph-mon[61345]: osdmap e264: 8 total, 8 up, 8 in 2026-03-09T20:25:15.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:15 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-39", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:25:15.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:15 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-39", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:25:15.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:15 vm05 ceph-mon[61345]: pgmap v343: 300 pgs: 40 unknown, 260 active+clean; 4.4 MiB data, 726 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T20:25:15.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:15 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:25:15.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:15 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:25:15.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:15 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-39", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:25:15.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:15 vm05 ceph-mon[61345]: osdmap e265: 8 total, 8 up, 8 in 2026-03-09T20:25:15.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:15 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-39"}]: dispatch 2026-03-09T20:25:15.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:15 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-39"}]: dispatch 2026-03-09T20:25:15.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:15 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1345939571' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm05-94310-53"}]: dispatch 2026-03-09T20:25:15.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:15 vm05 ceph-mon[61345]: from='client.50263 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm05-94310-53"}]: dispatch 2026-03-09T20:25:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:15 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:25:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:15 vm05 ceph-mon[51870]: osdmap e264: 8 total, 8 up, 8 in 2026-03-09T20:25:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:15 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-39", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:25:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:15 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-39", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:25:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:15 vm05 ceph-mon[51870]: pgmap v343: 300 pgs: 40 unknown, 260 active+clean; 4.4 MiB data, 726 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T20:25:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:15 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:25:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:15 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:25:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:15 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-39", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:25:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:15 vm05 ceph-mon[51870]: osdmap e265: 8 total, 8 up, 8 in 2026-03-09T20:25:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:15 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-39"}]: dispatch 2026-03-09T20:25:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:15 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-39"}]: dispatch 2026-03-09T20:25:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:15 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/1345939571' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm05-94310-53"}]: dispatch 2026-03-09T20:25:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:15 vm05 ceph-mon[51870]: from='client.50263 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm05-94310-53"}]: dispatch 2026-03-09T20:25:16.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:16 vm09 ceph-mon[54524]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:25:16.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:16 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:25:16.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:16 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-39"}]': finished 2026-03-09T20:25:16.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:16 vm09 ceph-mon[54524]: from='client.50263 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm05-94310-53"}]': finished 2026-03-09T20:25:16.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:16 vm09 ceph-mon[54524]: osdmap e266: 8 total, 8 up, 8 in 2026-03-09T20:25:16.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:16 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-39", "mode": "writeback"}]: dispatch 2026-03-09T20:25:16.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:16 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-39", "mode": "writeback"}]: dispatch 2026-03-09T20:25:16.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:16 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/1345939571' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm05-94310-53"}]: dispatch 2026-03-09T20:25:16.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:16 vm09 ceph-mon[54524]: from='client.50263 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm05-94310-53"}]: dispatch 2026-03-09T20:25:16.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:16 vm05 ceph-mon[61345]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:25:16.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:16 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:25:16.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:16 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-39"}]': finished 2026-03-09T20:25:16.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:16 vm05 ceph-mon[61345]: from='client.50263 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm05-94310-53"}]': finished 2026-03-09T20:25:16.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:16 vm05 ceph-mon[61345]: osdmap e266: 8 total, 8 up, 8 in 2026-03-09T20:25:16.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:16 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-39", "mode": "writeback"}]: dispatch 2026-03-09T20:25:16.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:16 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-39", "mode": "writeback"}]: dispatch 2026-03-09T20:25:16.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:16 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1345939571' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm05-94310-53"}]: dispatch 2026-03-09T20:25:16.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:16 vm05 ceph-mon[61345]: from='client.50263 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm05-94310-53"}]: dispatch 2026-03-09T20:25:16.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:16 vm05 ceph-mon[51870]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:25:16.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:16 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:25:16.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:16 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-39"}]': finished 2026-03-09T20:25:16.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:16 vm05 ceph-mon[51870]: from='client.50263 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm05-94310-53"}]': finished 2026-03-09T20:25:16.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:16 vm05 ceph-mon[51870]: osdmap e266: 8 total, 8 up, 8 in 2026-03-09T20:25:16.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:16 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-39", "mode": "writeback"}]: dispatch 2026-03-09T20:25:16.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:16 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-39", "mode": "writeback"}]: dispatch 2026-03-09T20:25:16.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:16 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/1345939571' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm05-94310-53"}]: dispatch 2026-03-09T20:25:16.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:16 vm05 ceph-mon[51870]: from='client.50263 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm05-94310-53"}]: dispatch 2026-03-09T20:25:17.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:17 vm05 ceph-mon[61345]: pgmap v346: 292 pgs: 292 active+clean; 4.4 MiB data, 731 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:25:17.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:17 vm05 ceph-mon[61345]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:25:17.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:17 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-39", "mode": "writeback"}]': finished 2026-03-09T20:25:17.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:17 vm05 ceph-mon[61345]: from='client.50263 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm05-94310-53"}]': finished 2026-03-09T20:25:17.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:17 vm05 ceph-mon[61345]: osdmap e267: 8 total, 8 up, 8 in 2026-03-09T20:25:17.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:17 vm05 ceph-mon[51870]: pgmap v346: 292 pgs: 292 active+clean; 4.4 MiB data, 731 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:25:17.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:17 vm05 ceph-mon[51870]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:25:17.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:17 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-39", "mode": "writeback"}]': finished 2026-03-09T20:25:17.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:17 vm05 ceph-mon[51870]: from='client.50263 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm05-94310-53"}]': finished 2026-03-09T20:25:17.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:17 vm05 ceph-mon[51870]: osdmap e267: 8 total, 8 up, 8 in 2026-03-09T20:25:17.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:17 vm09 ceph-mon[54524]: pgmap v346: 292 pgs: 292 active+clean; 4.4 MiB data, 731 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:25:17.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:17 vm09 ceph-mon[54524]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:25:17.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:17 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-39", "mode": "writeback"}]': finished 2026-03-09T20:25:17.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:17 vm09 ceph-mon[54524]: from='client.50263 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm05-94310-53"}]': finished 2026-03-09T20:25:17.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:17 vm09 ceph-mon[54524]: osdmap e267: 8 total, 8 up, 8 in 2026-03-09T20:25:18.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:18 vm05 ceph-mon[61345]: 
from='client.? v1:192.168.123.105:0/2147772507' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm05-94310-54"}]: dispatch 2026-03-09T20:25:18.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:18 vm05 ceph-mon[61345]: from='client.50266 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm05-94310-54"}]: dispatch 2026-03-09T20:25:18.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:18 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2147772507' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm05-94310-54"}]: dispatch 2026-03-09T20:25:18.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:18 vm05 ceph-mon[61345]: from='client.50266 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm05-94310-54"}]: dispatch 2026-03-09T20:25:18.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:18 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2147772507' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm05-94310-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:25:18.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:18 vm05 ceph-mon[61345]: from='client.50266 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm05-94310-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:25:18.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:18 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:25:18.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:18 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:25:18.634 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:18 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2147772507' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm05-94310-54"}]: dispatch 2026-03-09T20:25:18.634 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:18 vm05 ceph-mon[51870]: from='client.50266 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm05-94310-54"}]: dispatch 2026-03-09T20:25:18.634 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:18 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2147772507' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm05-94310-54"}]: dispatch 2026-03-09T20:25:18.634 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:18 vm05 ceph-mon[51870]: from='client.50266 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm05-94310-54"}]: dispatch 2026-03-09T20:25:18.634 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:18 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/2147772507' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm05-94310-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:25:18.634 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:18 vm05 ceph-mon[51870]: from='client.50266 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm05-94310-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:25:18.634 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:18 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:25:18.634 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:18 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:25:18.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:18 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2147772507' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm05-94310-54"}]: dispatch 2026-03-09T20:25:18.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:18 vm09 ceph-mon[54524]: from='client.50266 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm05-94310-54"}]: dispatch 2026-03-09T20:25:18.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:18 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2147772507' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm05-94310-54"}]: dispatch 2026-03-09T20:25:18.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:18 vm09 ceph-mon[54524]: from='client.50266 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm05-94310-54"}]: dispatch 2026-03-09T20:25:18.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:18 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2147772507' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm05-94310-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:25:18.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:18 vm09 ceph-mon[54524]: from='client.50266 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm05-94310-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:25:18.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:18 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:25:18.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:18 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:25:18.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:25:18 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:25:18] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:25:19.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:19 vm05 ceph-mon[61345]: pgmap v348: 292 pgs: 292 active+clean; 4.4 MiB data, 731 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:25:19.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:19 vm05 ceph-mon[61345]: from='client.50266 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm05-94310-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:25:19.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:19 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:25:19.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:19 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2147772507' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm05-94310-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm05-94310-54"}]: dispatch 2026-03-09T20:25:19.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:19 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-39"}]: dispatch 2026-03-09T20:25:19.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:19 vm05 ceph-mon[61345]: osdmap e268: 8 total, 8 up, 8 in 2026-03-09T20:25:19.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:19 vm05 ceph-mon[61345]: from='client.50266 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm05-94310-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm05-94310-54"}]: dispatch 2026-03-09T20:25:19.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:19 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-39"}]: dispatch 2026-03-09T20:25:19.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:19 vm05 ceph-mon[51870]: pgmap v348: 292 pgs: 292 active+clean; 4.4 MiB data, 731 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:25:19.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:19 vm05 ceph-mon[51870]: from='client.50266 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm05-94310-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:25:19.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:19 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:25:19.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:19 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2147772507' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm05-94310-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm05-94310-54"}]: dispatch 2026-03-09T20:25:19.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:19 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-39"}]: dispatch 2026-03-09T20:25:19.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:19 vm05 ceph-mon[51870]: osdmap e268: 8 total, 8 up, 8 in 2026-03-09T20:25:19.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:19 vm05 ceph-mon[51870]: from='client.50266 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm05-94310-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm05-94310-54"}]: dispatch 2026-03-09T20:25:19.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:19 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-39"}]: dispatch 2026-03-09T20:25:19.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:19 vm09 ceph-mon[54524]: pgmap v348: 292 pgs: 292 active+clean; 4.4 MiB data, 731 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:25:19.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:19 vm09 ceph-mon[54524]: from='client.50266 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm05-94310-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:25:19.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:19 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:25:19.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:19 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2147772507' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm05-94310-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm05-94310-54"}]: dispatch 2026-03-09T20:25:19.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:19 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-39"}]: dispatch 2026-03-09T20:25:19.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:19 vm09 ceph-mon[54524]: osdmap e268: 8 total, 8 up, 8 in 2026-03-09T20:25:19.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:19 vm09 ceph-mon[54524]: from='client.50266 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm05-94310-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm05-94310-54"}]: dispatch 2026-03-09T20:25:19.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:19 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-39"}]: dispatch 2026-03-09T20:25:20.317 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [==========] Running 77 tests from 4 test suites. 2026-03-09T20:25:20.317 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [----------] Global test environment set-up. 
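(Aside: the "osd tier ..." dispatches recorded in the mon entries above are the cache-tier attach/detach lifecycle that the LibRadosTwoPoolsPP cases below exercise. The test drives them through librados mon commands, but they correspond roughly to the following CLI sequence; this is only a sketch, with the pool names copied verbatim from the log, not the test's own code.)

    # Rough CLI equivalents of the tier commands seen in the mon log above
    # (pool names taken verbatim from the log; the test issues these via librados)
    ceph osd tier add test-rados-api-vm05-94573-6 test-rados-api-vm05-94573-39 --force-nonempty
    ceph osd tier set-overlay test-rados-api-vm05-94573-6 test-rados-api-vm05-94573-39
    ceph osd tier cache-mode test-rados-api-vm05-94573-39 writeback
    ceph osd tier remove-overlay test-rados-api-vm05-94573-6
    ceph osd tier remove test-rados-api-vm05-94573-6 test-rados-api-vm05-94573-39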
2026-03-09T20:25:20.317 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [----------] 3 tests from LibRadosTierPP 2026-03-09T20:25:20.317 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: seed 94573 2026-03-09T20:25:20.317 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTierPP.Dirty 2026-03-09T20:25:20.317 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTierPP.Dirty (478 ms) 2026-03-09T20:25:20.317 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTierPP.FlushWriteRaces 2026-03-09T20:25:20.317 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTierPP.FlushWriteRaces (11216 ms) 2026-03-09T20:25:20.317 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTierPP.HitSetNone 2026-03-09T20:25:20.317 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTierPP.HitSetNone (84 ms) 2026-03-09T20:25:20.317 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [----------] 3 tests from LibRadosTierPP (11779 ms total) 2026-03-09T20:25:20.317 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: 2026-03-09T20:25:20.317 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [----------] 48 tests from LibRadosTwoPoolsPP 2026-03-09T20:25:20.317 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.Overlay 2026-03-09T20:25:20.317 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.Overlay (7421 ms) 2026-03-09T20:25:20.317 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.Promote 2026-03-09T20:25:20.317 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.Promote (7585 ms) 2026-03-09T20:25:20.317 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.PromoteSnap 2026-03-09T20:25:20.317 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.PromoteSnap (9994 ms) 2026-03-09T20:25:20.317 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.PromoteSnapScrub 2026-03-09T20:25:20.317 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: my_snaps [3] 2026-03-09T20:25:20.317 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: my_snaps [4,3] 2026-03-09T20:25:20.317 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: my_snaps [5,4,3] 2026-03-09T20:25:20.317 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: my_snaps [6,5,4,3] 2026-03-09T20:25:20.318 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: promoting some heads 2026-03-09T20:25:20.318 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: promoting from clones for snap 6 2026-03-09T20:25:20.318 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: promoting from clones for snap 5 2026-03-09T20:25:20.318 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: promoting from clones for snap 4 2026-03-09T20:25:20.318 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: promoting from clones for snap 3 2026-03-09T20:25:20.318 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: waiting for scrubs... 
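(Aside: the erasure-code profile and pool commands interleaved in the surrounding mon entries, here for the IsCompletePP case of api_tier_pp, map roughly to the CLI sequence below; again a sketch only, with profile, pool, and rule names copied from the log.)

    # Rough CLI equivalents of the erasure-code setup/teardown in the mon log
    ceph osd erasure-code-profile set testprofile-IsCompletePP_vm05-94310-54 k=2 m=1 crush-failure-domain=osd
    ceph osd pool create IsCompletePP_vm05-94310-54 8 8 erasure testprofile-IsCompletePP_vm05-94310-54
    # cleanup, as dispatched later in the log
    ceph osd erasure-code-profile rm testprofile-IsCompletePP_vm05-94310-54
    ceph osd crush rule rm IsCompletePP_vm05-94310-54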
2026-03-09T20:25:20.318 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: done waiting 2026-03-09T20:25:20.318 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.PromoteSnapScrub (47334 ms) 2026-03-09T20:25:20.318 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.PromoteSnapTrimRace 2026-03-09T20:25:20.318 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.PromoteSnapTrimRace (10189 ms) 2026-03-09T20:25:20.318 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.Whiteout 2026-03-09T20:25:20.318 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.Whiteout (8115 ms) 2026-03-09T20:25:20.318 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.WhiteoutDeleteCreate 2026-03-09T20:25:20.318 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.WhiteoutDeleteCreate (8138 ms) 2026-03-09T20:25:20.318 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.Evict 2026-03-09T20:25:20.318 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.Evict (8095 ms) 2026-03-09T20:25:20.318 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.EvictSnap 2026-03-09T20:25:20.318 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.EvictSnap (10217 ms) 2026-03-09T20:25:20.318 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.EvictSnap2 2026-03-09T20:25:20.318 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.EvictSnap2 (8942 ms) 2026-03-09T20:25:20.318 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ListSnap 2026-03-09T20:25:20.318 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ListSnap (10196 ms) 2026-03-09T20:25:20.318 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.EvictSnapRollbackReadRace 2026-03-09T20:25:20.318 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.EvictSnapRollbackReadRace (13134 ms) 2026-03-09T20:25:20.318 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.TryFlush 2026-03-09T20:25:20.318 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.TryFlush (7815 ms) 2026-03-09T20:25:20.318 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.Flush 2026-03-09T20:25:20.318 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.Flush (8071 ms) 2026-03-09T20:25:20.318 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.FlushSnap 2026-03-09T20:25:20.318 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.FlushSnap (12819 ms) 2026-03-09T20:25:20.318 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.FlushTryFlushRaces 2026-03-09T20:25:20.318 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.FlushTryFlushRaces (8039 ms) 2026-03-09T20:25:20.318 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.TryFlushReadRace 2026-03-09T20:25:20.318 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.TryFlushReadRace (8204 ms) 2026-03-09T20:25:20.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:20 vm05 ceph-mon[61345]: 
Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:25:20.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:20 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-39"}]': finished 2026-03-09T20:25:20.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:20 vm05 ceph-mon[61345]: osdmap e269: 8 total, 8 up, 8 in 2026-03-09T20:25:20.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:20 vm05 ceph-mon[61345]: from='client.50266 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsCompletePP_vm05-94310-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm05-94310-54"}]': finished 2026-03-09T20:25:20.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:20 vm05 ceph-mon[61345]: osdmap e270: 8 total, 8 up, 8 in 2026-03-09T20:25:20.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:20 vm05 ceph-mon[51870]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:25:20.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:20 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-39"}]': finished 2026-03-09T20:25:20.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:20 vm05 ceph-mon[51870]: osdmap e269: 8 total, 8 up, 8 in 2026-03-09T20:25:20.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:20 vm05 ceph-mon[51870]: from='client.50266 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsCompletePP_vm05-94310-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm05-94310-54"}]': finished 2026-03-09T20:25:20.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:20 vm05 ceph-mon[51870]: osdmap e270: 8 total, 8 up, 8 in 2026-03-09T20:25:20.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:20 vm09 ceph-mon[54524]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:25:20.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:20 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-39"}]': finished 2026-03-09T20:25:20.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:20 vm09 ceph-mon[54524]: osdmap e269: 8 total, 8 up, 8 in 2026-03-09T20:25:20.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:20 vm09 ceph-mon[54524]: from='client.50266 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsCompletePP_vm05-94310-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm05-94310-54"}]': finished 2026-03-09T20:25:20.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:20 vm09 ceph-mon[54524]: osdmap e270: 8 total, 8 up, 8 in 2026-03-09T20:25:21.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:21 vm05 ceph-mon[61345]: pgmap v351: 292 pgs: 292 active+clean; 4.4 MiB data, 731 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:25:21.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:21 vm05 ceph-mon[61345]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:25:21.660 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:21 vm05 ceph-mon[61345]: osdmap e271: 8 total, 8 up, 8 in 2026-03-09T20:25:21.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:21 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:25:21.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:21 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:25:21.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:21 vm05 ceph-mon[51870]: pgmap v351: 292 pgs: 292 active+clean; 4.4 MiB data, 731 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:25:21.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:21 vm05 ceph-mon[51870]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:25:21.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:21 vm05 ceph-mon[51870]: osdmap e271: 8 total, 8 up, 8 in 2026-03-09T20:25:21.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:21 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:25:21.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:21 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:25:21.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:21 vm09 ceph-mon[54524]: pgmap v351: 292 pgs: 292 active+clean; 4.4 MiB data, 731 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:25:21.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:21 vm09 ceph-mon[54524]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:25:21.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:21 vm09 ceph-mon[54524]: osdmap e271: 8 total, 8 up, 8 in 2026-03-09T20:25:21.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:21 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:25:21.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:21 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:25:23.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:23 vm05 ceph-mon[61345]: pgmap v354: 300 pgs: 2 creating+peering, 38 unknown, 260 active+clean; 8.3 MiB data, 743 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s 2026-03-09T20:25:23.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:23 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-41","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:25:23.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:23 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-41", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:25:23.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:23 vm05 ceph-mon[61345]: osdmap e272: 8 total, 8 up, 8 in 2026-03-09T20:25:23.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:23 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2147772507' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm05-94310-54"}]: dispatch 2026-03-09T20:25:23.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:23 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-41", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:25:23.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:23 vm05 ceph-mon[61345]: from='client.50266 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm05-94310-54"}]: dispatch 2026-03-09T20:25:23.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:23 vm05 ceph-mon[51870]: pgmap v354: 300 pgs: 2 creating+peering, 38 unknown, 260 active+clean; 8.3 MiB data, 743 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s 2026-03-09T20:25:23.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:23 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-41","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:25:23.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:23 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-41", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:25:23.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:23 vm05 ceph-mon[51870]: osdmap e272: 8 total, 8 up, 8 in 2026-03-09T20:25:23.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:23 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/2147772507' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm05-94310-54"}]: dispatch 2026-03-09T20:25:23.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:23 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-41", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:25:23.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:23 vm05 ceph-mon[51870]: from='client.50266 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm05-94310-54"}]: dispatch 2026-03-09T20:25:23.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:23 vm09 ceph-mon[54524]: pgmap v354: 300 pgs: 2 creating+peering, 38 unknown, 260 active+clean; 8.3 MiB data, 743 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s 2026-03-09T20:25:23.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:23 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-41","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:25:23.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:23 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-41", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:25:23.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:23 vm09 ceph-mon[54524]: osdmap e272: 8 total, 8 up, 8 in 2026-03-09T20:25:23.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:23 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2147772507' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm05-94310-54"}]: dispatch 2026-03-09T20:25:23.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:23 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-41", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:25:23.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:23 vm09 ceph-mon[54524]: from='client.50266 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm05-94310-54"}]: dispatch 2026-03-09T20:25:24.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:24 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-41", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:25:24.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:24 vm05 ceph-mon[61345]: from='client.50266 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm05-94310-54"}]': finished 2026-03-09T20:25:24.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:24 vm05 ceph-mon[61345]: osdmap e273: 8 total, 8 up, 8 in 2026-03-09T20:25:24.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:24 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-41","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T20:25:24.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:24 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2147772507' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm05-94310-54"}]: dispatch 2026-03-09T20:25:24.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:24 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-41","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T20:25:24.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:24 vm05 ceph-mon[61345]: from='client.50266 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm05-94310-54"}]: dispatch 2026-03-09T20:25:24.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:24 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-41", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:25:24.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:24 vm05 ceph-mon[51870]: from='client.50266 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm05-94310-54"}]': finished 2026-03-09T20:25:24.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:24 vm05 ceph-mon[51870]: osdmap e273: 8 total, 8 up, 8 in 2026-03-09T20:25:24.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:24 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-41","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T20:25:24.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:24 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2147772507' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm05-94310-54"}]: dispatch 2026-03-09T20:25:24.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:24 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-41","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T20:25:24.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:24 vm05 ceph-mon[51870]: from='client.50266 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm05-94310-54"}]: dispatch 2026-03-09T20:25:24.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:24 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-41", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:25:24.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:24 vm09 ceph-mon[54524]: from='client.50266 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm05-94310-54"}]': finished 2026-03-09T20:25:24.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:24 vm09 ceph-mon[54524]: osdmap e273: 8 total, 8 up, 8 in 2026-03-09T20:25:24.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:24 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-41","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T20:25:24.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:24 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2147772507' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm05-94310-54"}]: dispatch 2026-03-09T20:25:24.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:24 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-41","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T20:25:24.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:24 vm09 ceph-mon[54524]: from='client.50266 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm05-94310-54"}]: dispatch 2026-03-09T20:25:25.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:25 vm05 ceph-mon[61345]: pgmap v357: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 743 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s 2026-03-09T20:25:25.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:25 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-41","var": "hit_set_count","val": "2"}]': finished 2026-03-09T20:25:25.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:25 vm05 ceph-mon[61345]: from='client.50266 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm05-94310-54"}]': finished 2026-03-09T20:25:25.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:25 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-41","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:25:25.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:25 vm05 ceph-mon[61345]: osdmap e274: 8 total, 8 up, 8 in 2026-03-09T20:25:25.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:25 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-41","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:25:25.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:25 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2941039056' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm05-94310-55"}]: dispatch 2026-03-09T20:25:25.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:25 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2941039056' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm05-94310-55"}]: dispatch 2026-03-09T20:25:25.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:25 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/2941039056' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm05-94310-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:25:25.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:25 vm05 ceph-mon[51870]: pgmap v357: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 743 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s 2026-03-09T20:25:25.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:25 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-41","var": "hit_set_count","val": "2"}]': finished 2026-03-09T20:25:25.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:25 vm05 ceph-mon[51870]: from='client.50266 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm05-94310-54"}]': finished 2026-03-09T20:25:25.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:25 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-41","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:25:25.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:25 vm05 ceph-mon[51870]: osdmap e274: 8 total, 8 up, 8 in 2026-03-09T20:25:25.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:25 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-41","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:25:25.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:25 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2941039056' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm05-94310-55"}]: dispatch 2026-03-09T20:25:25.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:25 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2941039056' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm05-94310-55"}]: dispatch 2026-03-09T20:25:25.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:25 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2941039056' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm05-94310-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:25:25.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:25 vm09 ceph-mon[54524]: pgmap v357: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 743 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s 2026-03-09T20:25:25.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:25 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-41","var": "hit_set_count","val": "2"}]': finished 2026-03-09T20:25:25.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:25 vm09 ceph-mon[54524]: from='client.50266 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm05-94310-54"}]': finished 2026-03-09T20:25:25.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:25 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-41","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:25:25.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:25 vm09 ceph-mon[54524]: osdmap e274: 8 total, 8 up, 8 in 2026-03-09T20:25:25.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:25 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-41","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:25:25.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:25 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2941039056' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm05-94310-55"}]: dispatch 2026-03-09T20:25:25.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:25 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2941039056' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm05-94310-55"}]: dispatch 2026-03-09T20:25:25.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:25 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2941039056' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm05-94310-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:25:25.773 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:25:25 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:25:26.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:26 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-41","var": "hit_set_period","val": "600"}]': finished 2026-03-09T20:25:26.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:26 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2941039056' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm05-94310-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:25:26.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:26 vm05 ceph-mon[61345]: osdmap e275: 8 total, 8 up, 8 in 2026-03-09T20:25:26.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:26 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2941039056' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm05-94310-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm05-94310-55"}]: dispatch 2026-03-09T20:25:26.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:26 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-41","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T20:25:26.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:26 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-41","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T20:25:26.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:26 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:25:26.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:26 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-41","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T20:25:26.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:26 vm05 ceph-mon[61345]: osdmap e276: 8 total, 8 up, 8 in 2026-03-09T20:25:26.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:26 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-41","var": "hit_set_period","val": "600"}]': finished 2026-03-09T20:25:26.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:26 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2941039056' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm05-94310-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:25:26.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:26 vm05 ceph-mon[51870]: osdmap e275: 8 total, 8 up, 8 in 2026-03-09T20:25:26.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:26 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2941039056' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm05-94310-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm05-94310-55"}]: dispatch 2026-03-09T20:25:26.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:26 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-41","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T20:25:26.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:26 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-41","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T20:25:26.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:26 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:25:26.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:26 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-41","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T20:25:26.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:26 vm05 ceph-mon[51870]: osdmap e276: 8 total, 8 up, 8 in 2026-03-09T20:25:26.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:26 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-41","var": "hit_set_period","val": "600"}]': finished 2026-03-09T20:25:26.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:26 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2941039056' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm05-94310-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:25:26.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:26 vm09 ceph-mon[54524]: osdmap e275: 8 total, 8 up, 8 in 2026-03-09T20:25:26.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:26 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2941039056' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm05-94310-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm05-94310-55"}]: dispatch 2026-03-09T20:25:26.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:26 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-41","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T20:25:26.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:26 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-41","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T20:25:26.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:26 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:25:26.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:26 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-41","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T20:25:26.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:26 vm09 ceph-mon[54524]: osdmap e276: 8 total, 8 up, 8 in 2026-03-09T20:25:27.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:27 vm05 ceph-mon[61345]: pgmap v360: 292 pgs: 292 active+clean; 8.3 MiB data, 731 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:25:27.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:27 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2941039056' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafePP_vm05-94310-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm05-94310-55"}]': finished 2026-03-09T20:25:27.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:27 vm05 ceph-mon[61345]: osdmap e277: 8 total, 8 up, 8 in 2026-03-09T20:25:27.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:27 vm05 ceph-mon[51870]: pgmap v360: 292 pgs: 292 active+clean; 8.3 MiB data, 731 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:25:27.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:27 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2941039056' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafePP_vm05-94310-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm05-94310-55"}]': finished 2026-03-09T20:25:27.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:27 vm05 ceph-mon[51870]: osdmap e277: 8 total, 8 up, 8 in 2026-03-09T20:25:27.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:27 vm09 ceph-mon[54524]: pgmap v360: 292 pgs: 292 active+clean; 8.3 MiB data, 731 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:25:27.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:27 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2941039056' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafePP_vm05-94310-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm05-94310-55"}]': finished 2026-03-09T20:25:27.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:27 vm09 ceph-mon[54524]: osdmap e277: 8 total, 8 up, 8 in 2026-03-09T20:25:28.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:28 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:25:28.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:28 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:25:28.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:28 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-41"}]: dispatch 2026-03-09T20:25:28.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:28 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-41"}]: dispatch 2026-03-09T20:25:28.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:28 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-41"}]': finished 2026-03-09T20:25:28.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:28 vm05 ceph-mon[61345]: osdmap e278: 8 total, 8 up, 8 in 2026-03-09T20:25:28.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:28 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:25:28.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:28 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:25:28.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:28 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-41"}]: dispatch 2026-03-09T20:25:28.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:28 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-41"}]: dispatch 2026-03-09T20:25:28.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:28 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-41"}]': finished 2026-03-09T20:25:28.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:28 vm05 ceph-mon[51870]: osdmap e278: 8 total, 8 up, 8 in 2026-03-09T20:25:28.660 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:25:28 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:25:28] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:25:28.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:28 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:25:28.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:28 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:25:28.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:28 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-41"}]: dispatch 2026-03-09T20:25:28.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:28 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-41"}]: dispatch 2026-03-09T20:25:28.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:28 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-41"}]': finished 2026-03-09T20:25:28.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:28 vm09 ceph-mon[54524]: osdmap e278: 8 total, 8 up, 8 in 2026-03-09T20:25:29.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:29 vm09 ceph-mon[54524]: pgmap v363: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 731 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:25:29.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:29 vm09 ceph-mon[54524]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:25:29.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:29 vm09 ceph-mon[54524]: osdmap e279: 8 total, 8 up, 8 in 2026-03-09T20:25:29.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:29 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2941039056' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm05-94310-55"}]: dispatch 2026-03-09T20:25:29.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:29 vm05 ceph-mon[61345]: pgmap v363: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 731 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:25:29.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:29 vm05 ceph-mon[61345]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:25:29.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:29 vm05 ceph-mon[61345]: osdmap e279: 8 total, 8 up, 8 in 2026-03-09T20:25:29.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:29 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/2941039056' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm05-94310-55"}]: dispatch 2026-03-09T20:25:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:29 vm05 ceph-mon[51870]: pgmap v363: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 731 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:25:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:29 vm05 ceph-mon[51870]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:25:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:29 vm05 ceph-mon[51870]: osdmap e279: 8 total, 8 up, 8 in 2026-03-09T20:25:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:29 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2941039056' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm05-94310-55"}]: dispatch 2026-03-09T20:25:30.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:30 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:25:30.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:30 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2941039056' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm05-94310-55"}]': finished 2026-03-09T20:25:30.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:30 vm09 ceph-mon[54524]: osdmap e280: 8 total, 8 up, 8 in 2026-03-09T20:25:30.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:30 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2941039056' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm05-94310-55"}]: dispatch 2026-03-09T20:25:30.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:30 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:25:30.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:30 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:25:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:30 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:25:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:30 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2941039056' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm05-94310-55"}]': finished 2026-03-09T20:25:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:30 vm05 ceph-mon[61345]: osdmap e280: 8 total, 8 up, 8 in 2026-03-09T20:25:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:30 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2941039056' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm05-94310-55"}]: dispatch 2026-03-09T20:25:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:30 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:25:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:30 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:25:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:30 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:25:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:30 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2941039056' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm05-94310-55"}]': finished 2026-03-09T20:25:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:30 vm05 ceph-mon[51870]: osdmap e280: 8 total, 8 up, 8 in 2026-03-09T20:25:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:30 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2941039056' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm05-94310-55"}]: dispatch 2026-03-09T20:25:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:30 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:25:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:30 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:25:31.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:31 vm09 ceph-mon[54524]: pgmap v366: 260 pgs: 260 active+clean; 8.3 MiB data, 731 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:25:31.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:31 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2941039056' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm05-94310-55"}]': finished 2026-03-09T20:25:31.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:31 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:25:31.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:31 vm09 ceph-mon[54524]: osdmap e281: 8 total, 8 up, 8 in 2026-03-09T20:25:31.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:31 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm05-94573-6","var": "pg_num","format": "json"}]: dispatch 2026-03-09T20:25:31.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:31 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:25:31.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:31 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:25:31.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:31 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/541904402' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm05-94310-56"}]: dispatch 2026-03-09T20:25:31.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:31 vm09 ceph-mon[54524]: from='client.50275 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm05-94310-56"}]: dispatch 2026-03-09T20:25:31.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:31 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/541904402' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm05-94310-56"}]: dispatch 2026-03-09T20:25:31.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:31 vm09 ceph-mon[54524]: from='client.50275 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm05-94310-56"}]: dispatch 2026-03-09T20:25:31.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:31 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/541904402' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm05-94310-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:25:31.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:31 vm09 ceph-mon[54524]: from='client.50275 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm05-94310-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:25:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:31 vm05 ceph-mon[61345]: pgmap v366: 260 pgs: 260 active+clean; 8.3 MiB data, 731 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:25:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:31 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2941039056' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm05-94310-55"}]': finished 2026-03-09T20:25:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:31 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:25:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:31 vm05 ceph-mon[61345]: osdmap e281: 8 total, 8 up, 8 in 2026-03-09T20:25:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:31 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm05-94573-6","var": "pg_num","format": "json"}]: dispatch 2026-03-09T20:25:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:31 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:25:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:31 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:25:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:31 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/541904402' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm05-94310-56"}]: dispatch 2026-03-09T20:25:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:31 vm05 ceph-mon[61345]: from='client.50275 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm05-94310-56"}]: dispatch 2026-03-09T20:25:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:31 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/541904402' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm05-94310-56"}]: dispatch 2026-03-09T20:25:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:31 vm05 ceph-mon[61345]: from='client.50275 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm05-94310-56"}]: dispatch 2026-03-09T20:25:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:31 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/541904402' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm05-94310-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:25:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:31 vm05 ceph-mon[61345]: from='client.50275 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm05-94310-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:25:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:31 vm05 ceph-mon[51870]: pgmap v366: 260 pgs: 260 active+clean; 8.3 MiB data, 731 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:25:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:31 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2941039056' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm05-94310-55"}]': finished 2026-03-09T20:25:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:31 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:25:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:31 vm05 ceph-mon[51870]: osdmap e281: 8 total, 8 up, 8 in 2026-03-09T20:25:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:31 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm05-94573-6","var": "pg_num","format": "json"}]: dispatch 2026-03-09T20:25:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:31 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:25:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:31 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:25:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:31 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/541904402' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm05-94310-56"}]: dispatch 2026-03-09T20:25:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:31 vm05 ceph-mon[51870]: from='client.50275 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm05-94310-56"}]: dispatch 2026-03-09T20:25:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:31 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/541904402' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm05-94310-56"}]: dispatch 2026-03-09T20:25:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:31 vm05 ceph-mon[51870]: from='client.50275 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm05-94310-56"}]: dispatch 2026-03-09T20:25:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:31 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/541904402' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm05-94310-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:25:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:31 vm05 ceph-mon[51870]: from='client.50275 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm05-94310-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:25:33.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:33 vm09 ceph-mon[54524]: pgmap v369: 292 pgs: 12 creating+peering, 20 unknown, 260 active+clean; 8.3 MiB data, 732 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:25:33.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:33 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-43", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:25:33.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:33 vm09 ceph-mon[54524]: from='client.50275 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm05-94310-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:25:33.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:33 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-09T20:25:33.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:33 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/541904402' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm05-94310-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm05-94310-56"}]: dispatch 2026-03-09T20:25:33.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:33 vm09 ceph-mon[54524]: osdmap e282: 8 total, 8 up, 8 in 2026-03-09T20:25:33.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:33 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-09T20:25:33.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:33 vm09 ceph-mon[54524]: from='client.50275 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm05-94310-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm05-94310-56"}]: dispatch 2026-03-09T20:25:33.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:33 vm05 ceph-mon[61345]: pgmap v369: 292 pgs: 12 creating+peering, 20 unknown, 260 active+clean; 8.3 MiB data, 732 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:25:33.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:33 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-43", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:25:33.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:33 vm05 ceph-mon[61345]: from='client.50275 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm05-94310-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:25:33.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:33 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-09T20:25:33.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:33 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/541904402' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm05-94310-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm05-94310-56"}]: dispatch 2026-03-09T20:25:33.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:33 vm05 ceph-mon[61345]: osdmap e282: 8 total, 8 up, 8 in 2026-03-09T20:25:33.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:33 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-09T20:25:33.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:33 vm05 ceph-mon[61345]: from='client.50275 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm05-94310-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm05-94310-56"}]: dispatch 2026-03-09T20:25:33.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:33 vm05 ceph-mon[51870]: pgmap v369: 292 pgs: 12 creating+peering, 20 unknown, 260 active+clean; 8.3 MiB data, 732 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:25:33.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:33 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-43", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:25:33.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:33 vm05 ceph-mon[51870]: from='client.50275 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm05-94310-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:25:33.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:33 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-09T20:25:33.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:33 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/541904402' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm05-94310-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm05-94310-56"}]: dispatch 2026-03-09T20:25:33.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:33 vm05 ceph-mon[51870]: osdmap e282: 8 total, 8 up, 8 in 2026-03-09T20:25:33.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:33 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-09T20:25:33.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:33 vm05 ceph-mon[51870]: from='client.50275 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm05-94310-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm05-94310-56"}]: dispatch 2026-03-09T20:25:34.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:34 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-43","var": "hit_set_count","val": "8"}]': finished 2026-03-09T20:25:34.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:34 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:25:34.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:34 vm09 ceph-mon[54524]: osdmap e283: 8 total, 8 up, 8 in 2026-03-09T20:25:34.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:34 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:25:34.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:34 vm09 ceph-mon[54524]: from='client.50275 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm05-94310-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm05-94310-56"}]': finished 2026-03-09T20:25:34.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:34 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-43","var": "hit_set_period","val": "600"}]': finished 2026-03-09T20:25:34.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:34 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-09T20:25:34.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:34 vm09 ceph-mon[54524]: osdmap e284: 8 total, 8 up, 8 in 2026-03-09T20:25:34.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:34 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-09T20:25:34.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:34 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-43","var": "hit_set_count","val": "8"}]': finished 2026-03-09T20:25:34.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:34 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:25:34.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:34 vm05 ceph-mon[61345]: osdmap e283: 8 total, 8 up, 8 in 2026-03-09T20:25:34.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:34 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:25:34.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:34 vm05 ceph-mon[61345]: from='client.50275 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm05-94310-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm05-94310-56"}]': finished 2026-03-09T20:25:34.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:34 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-43","var": "hit_set_period","val": "600"}]': finished 2026-03-09T20:25:34.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:34 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-09T20:25:34.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:34 vm05 ceph-mon[61345]: osdmap e284: 8 total, 8 up, 8 in 2026-03-09T20:25:34.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:34 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-09T20:25:34.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:34 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-43","var": "hit_set_count","val": "8"}]': finished 2026-03-09T20:25:34.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:34 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:25:34.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:34 vm05 ceph-mon[51870]: osdmap e283: 8 total, 8 up, 8 in 2026-03-09T20:25:34.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:34 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:25:34.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:34 vm05 ceph-mon[51870]: from='client.50275 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm05-94310-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm05-94310-56"}]': finished 2026-03-09T20:25:34.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:34 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-43","var": "hit_set_period","val": "600"}]': finished 2026-03-09T20:25:34.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:34 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-09T20:25:34.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:34 vm05 ceph-mon[51870]: osdmap e284: 8 total, 8 up, 8 in 2026-03-09T20:25:34.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:34 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-09T20:25:35.753 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.HitSetRead 2026-03-09T20:25:35.753 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: hmm, no HitSet yet 2026-03-09T20:25:35.753 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: ok, hit_set contains 266:602f83fe:::foo:head 2026-03-09T20:25:35.753 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.HitSetRead (9052 ms) 2026-03-09T20:25:35.753 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.HitSetWrite 2026-03-09T20:25:35.753 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg_num = 32 2026-03-09T20:25:35.753 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 0 ls 1773087936,0 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 1 ls 1773087936,0 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 2 ls 1773087936,0 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 3 ls 1773087936,0 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 4 ls 1773087936,0 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 5 ls 1773087936,0 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 6 ls 1773087936,0 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 7 ls 1773087936,0 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 8 ls 1773087936,0 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 9 ls 1773087936,0 
2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 10 ls 1773087936,0 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 11 ls 1773087936,0 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 12 ls 1773087936,0 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 13 ls 1773087936,0 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 14 ls 1773087936,0 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 15 ls 1773087936,0 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 16 ls 1773087936,0 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 17 ls 1773087936,0 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 18 ls 1773087936,0 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 19 ls 1773087936,0 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 20 ls 1773087936,0 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 21 ls 1773087936,0 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 22 ls 1773087936,0 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 23 ls 1773087936,0 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 24 ls 1773087936,0 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 25 ls 1773087936,0 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 26 ls 1773087936,0 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 27 ls 1773087936,0 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 28 ls 1773087936,0 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 29 ls 1773087936,0 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 30 ls 1773087936,0 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 31 ls 1773087936,0 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg_num = 32 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:6cac518f:::0:head 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:02547ec2:::1:head 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:f905c69b:::2:head 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:cfc208b3:::3:head 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:d83876eb:::4:head 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:b29083e3:::5:head 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:c4fdafeb:::6:head 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:5c6b0b28:::7:head 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:bd63b0f1:::8:head 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:e960b815:::9:head 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 
268:52ea6a34:::10:head 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:89d3ae78:::11:head 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:de5d7c5f:::12:head 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:566253c9:::13:head 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:62a1935d:::14:head 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:863748b0:::15:head 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:3958e169:::16:head 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:4d4dabf9:::17:head 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:8391935d:::18:head 2026-03-09T20:25:35.754 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:28883081:::19:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:69259c59:::20:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:4bdb80b7:::21:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:a11c5d71:::22:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:271af37b:::23:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:95b121be:::24:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:58d1031b:::25:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:0a050783:::26:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:c709704c:::27:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:cbe56eaf:::28:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:86b4b162:::29:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:70d89383:::30:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:dd450c7c:::31:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:6d5729b1:::32:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:c388f3fb:::33:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:56cfea31:::34:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:9dbc1bf7:::35:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:40b74ccd:::36:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:4d5aaf42:::37:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:920f362c:::38:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:6cc53222:::39:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:9cad833f:::40:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: 
checking for 268:1ea84d41:::41:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:c4480ef6:::42:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:a694361e:::43:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:d1bd33e9:::44:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:ddc2cd5d:::45:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:2b782207:::46:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:7b187fca:::47:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:90ecdf6f:::48:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:a5ed95fe:::49:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:ea0eaa55:::50:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:f33ef17b:::51:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:a0d1b2f6:::52:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:60c5229e:::53:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:edcbc575:::54:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:102cf253:::55:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:efb7fb0b:::56:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:50d0a326:::57:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:d4dc5daf:::58:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:3a130462:::59:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:ec87ed71:::60:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:d5bc9454:::61:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:3ddfe313:::62:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:7c2816b9:::63:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:47e00e4d:::64:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:c6410c18:::65:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:b48ed237:::66:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:cd63ad31:::67:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:b179e92b:::68:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:0d9f741a:::69:head 2026-03-09T20:25:35.755 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:6d3352ae:::70:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:c6d5c19e:::71:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: 
api_tier_pp: checking for 268:bc4729c3:::72:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:77e930b9:::73:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:0abeecfd:::74:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:b7c37e15:::75:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:b6378398:::76:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:02bd68de:::77:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:cc795d2d:::78:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:630d4fea:::79:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:e0d29ef5:::80:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:fd6f13d2:::81:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:606461d5:::82:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:eadbdc43:::83:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:8761d0bb:::84:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:9ef0186f:::85:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:e0d41294:::86:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:961de695:::87:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:1423148f:::88:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:633a8fa2:::89:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:a8653809:::90:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:3dac8b33:::91:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:35aad435:::92:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:f6dcc343:::93:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:dbbdad87:::94:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:1cb48ce0:::95:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:03cd461c:::96:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:17a4ea99:::97:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:9993c9a7:::98:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:6394211c:::99:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:94c7ae57:::100:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:6fdee5bb:::101:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:9a477fd1:::102:head 2026-03-09T20:25:35.756 
INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:eb850916:::103:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:affc56b9:::104:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:b42dc814:::105:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:f319f8f0:::106:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:9a40b9de:::107:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:8b524f28:::108:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:e3de589f:::109:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:90f90a5b:::110:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:a7b4f1d7:::111:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:af51766e:::112:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:b6f90bd1:::113:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:e0261208:::114:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:c9569ef7:::115:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:61bebe50:::116:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:fe93412b:::117:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:d3d38bee:::118:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:3100ba0c:::119:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:d0560ada:::120:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:f0ea8b35:::121:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:766f231a:::122:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:a07a2582:::123:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:bd7c6b3a:::124:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:fb2ddaff:::125:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:4408e1fe:::126:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:ee1df7a7:::127:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:c3002909:::128:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:4f48ffa9:::129:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:edf38733:::130:head 2026-03-09T20:25:35.756 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:c08425c0:::131:head 2026-03-09T20:25:35.757 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:5f902d98:::132:head 2026-03-09T20:25:35.757 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 
268:41ea2c93:::133:head 2026-03-09T20:25:35.757 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:813cee13:::134:head 2026-03-09T20:25:35.757 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:0131818d:::135:head 2026-03-09T20:25:35.757 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:26ba5a85:::136:head 2026-03-09T20:25:35.757 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:381b8a5a:::137:head 2026-03-09T20:25:35.757 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:28797e47:::138:head 2026-03-09T20:25:35.757 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:bfca7f22:::139:head 2026-03-09T20:25:35.757 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:36807075:::140:head 2026-03-09T20:25:35.757 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:80b03975:::141:head 2026-03-09T20:25:35.757 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:5c15709b:::142:head 2026-03-09T20:25:35.757 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:f39ea15e:::143:head 2026-03-09T20:25:35.757 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:ea992956:::144:head 2026-03-09T20:25:35.757 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:48887b1c:::145:head 2026-03-09T20:25:35.757 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:9f24a9dd:::146:head 2026-03-09T20:25:35.757 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:987f100b:::147:head 2026-03-09T20:25:35.757 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:d2dd3581:::148:head 2026-03-09T20:25:35.757 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:7fed1808:::149:head 2026-03-09T20:25:35.757 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:c80b70e9:::150:head 2026-03-09T20:25:35.757 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:85ed90f9:::151:head 2026-03-09T20:25:35.757 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:36428b24:::152:head 2026-03-09T20:25:35.757 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:d044c34a:::153:head 2026-03-09T20:25:35.757 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:7c18bf58:::154:head 2026-03-09T20:25:35.757 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:d1c21232:::155:head 2026-03-09T20:25:35.757 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:a7a3c575:::156:head 2026-03-09T20:25:35.757 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:87da0633:::157:head 2026-03-09T20:25:35.757 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:d5ac3822:::158:head 2026-03-09T20:25:35.757 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:3f20522d:::159:head 2026-03-09T20:25:35.757 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:6ca26563:::160:head 2026-03-09T20:25:35.757 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:532ce135:::161:head 2026-03-09T20:25:35.757 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:c78863e6:::162:head 2026-03-09T20:25:35.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:35 vm09 ceph-mon[54524]: pgmap v372: 292 pgs: 12 creating+peering, 20 unknown, 260 
active+clean; 8.3 MiB data, 732 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:25:35.776 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:35 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-43","var": "hit_set_type","val": "explicit_hash"}]': finished 2026-03-09T20:25:35.776 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:35 vm09 ceph-mon[54524]: osdmap e285: 8 total, 8 up, 8 in 2026-03-09T20:25:35.776 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:25:35 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:25:35.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:35 vm05 ceph-mon[61345]: pgmap v372: 292 pgs: 12 creating+peering, 20 unknown, 260 active+clean; 8.3 MiB data, 732 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:25:35.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:35 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-43","var": "hit_set_type","val": "explicit_hash"}]': finished 2026-03-09T20:25:35.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:35 vm05 ceph-mon[61345]: osdmap e285: 8 total, 8 up, 8 in 2026-03-09T20:25:35.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:35 vm05 ceph-mon[51870]: pgmap v372: 292 pgs: 12 creating+peering, 20 unknown, 260 active+clean; 8.3 MiB data, 732 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:25:35.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:35 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-43","var": "hit_set_type","val": "explicit_hash"}]': finished 2026-03-09T20:25:35.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:35 vm05 ceph-mon[51870]: osdmap e285: 8 total, 8 up, 8 in 2026-03-09T20:25:36.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:36 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:25:36.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:36 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm05-94573-43","var": "pg_num","format": "json"}]: dispatch 2026-03-09T20:25:36.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:36 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:25:36.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:36 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:25:36.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:36 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-43"}]: dispatch 2026-03-09T20:25:36.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:36 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-43"}]: dispatch 2026-03-09T20:25:36.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:36 vm09 ceph-mon[54524]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:25:36.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:36 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-43"}]': finished 2026-03-09T20:25:36.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:36 vm09 ceph-mon[54524]: osdmap e286: 8 total, 8 up, 8 in 2026-03-09T20:25:36.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:36 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/541904402' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm05-94310-56"}]: dispatch 2026-03-09T20:25:36.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:36 vm09 ceph-mon[54524]: from='client.50275 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm05-94310-56"}]: dispatch 2026-03-09T20:25:36.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:36 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:25:36.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:36 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm05-94573-43","var": "pg_num","format": "json"}]: dispatch 2026-03-09T20:25:36.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:36 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:25:36.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:36 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:25:36.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:36 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-43"}]: dispatch 2026-03-09T20:25:36.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:36 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-43"}]: dispatch 2026-03-09T20:25:36.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:36 vm05 ceph-mon[61345]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:25:36.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:36 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-43"}]': finished 2026-03-09T20:25:36.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:36 vm05 ceph-mon[61345]: osdmap e286: 8 total, 8 up, 8 in 2026-03-09T20:25:36.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:36 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/541904402' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm05-94310-56"}]: dispatch 2026-03-09T20:25:36.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:36 vm05 ceph-mon[61345]: from='client.50275 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm05-94310-56"}]: dispatch 2026-03-09T20:25:36.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:36 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:25:36.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:36 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm05-94573-43","var": "pg_num","format": "json"}]: dispatch 2026-03-09T20:25:36.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:36 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:25:36.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:36 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:25:36.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:36 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-43"}]: dispatch 2026-03-09T20:25:36.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:36 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-43"}]: dispatch 2026-03-09T20:25:36.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:36 vm05 ceph-mon[51870]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:25:36.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:36 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-43"}]': finished 2026-03-09T20:25:36.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:36 vm05 ceph-mon[51870]: osdmap e286: 8 total, 8 up, 8 in 2026-03-09T20:25:36.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:36 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/541904402' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm05-94310-56"}]: dispatch 2026-03-09T20:25:36.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:36 vm05 ceph-mon[51870]: from='client.50275 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm05-94310-56"}]: dispatch 2026-03-09T20:25:37.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:37 vm09 ceph-mon[54524]: pgmap v375: 300 pgs: 6 creating+peering, 294 active+clean; 8.3 MiB data, 732 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 6.5 KiB/s wr, 7 op/s 2026-03-09T20:25:37.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:37 vm09 ceph-mon[54524]: from='client.50275 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm05-94310-56"}]': finished 2026-03-09T20:25:37.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:37 vm09 ceph-mon[54524]: osdmap e287: 8 total, 8 up, 8 in 2026-03-09T20:25:37.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:37 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/541904402' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm05-94310-56"}]: dispatch 2026-03-09T20:25:37.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:37 vm05 ceph-mon[61345]: pgmap v375: 300 pgs: 6 creating+peering, 294 active+clean; 8.3 MiB data, 732 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 6.5 KiB/s wr, 7 op/s 2026-03-09T20:25:37.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:37 vm05 ceph-mon[61345]: from='client.50275 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm05-94310-56"}]': finished 2026-03-09T20:25:37.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:37 vm05 ceph-mon[61345]: osdmap e287: 8 total, 8 up, 8 in 2026-03-09T20:25:37.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:37 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/541904402' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm05-94310-56"}]: dispatch 2026-03-09T20:25:37.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:37 vm05 ceph-mon[51870]: pgmap v375: 300 pgs: 6 creating+peering, 294 active+clean; 8.3 MiB data, 732 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 6.5 KiB/s wr, 7 op/s 2026-03-09T20:25:37.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:37 vm05 ceph-mon[51870]: from='client.50275 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm05-94310-56"}]': finished 2026-03-09T20:25:37.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:37 vm05 ceph-mon[51870]: osdmap e287: 8 total, 8 up, 8 in 2026-03-09T20:25:37.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:37 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/541904402' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm05-94310-56"}]: dispatch 2026-03-09T20:25:38.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:38 vm09 ceph-mon[54524]: from='client.50275 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm05-94310-56"}]: dispatch 2026-03-09T20:25:38.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:38 vm09 ceph-mon[54524]: from='client.50275 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm05-94310-56"}]': finished 2026-03-09T20:25:38.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:38 vm09 ceph-mon[54524]: osdmap e288: 8 total, 8 up, 8 in 2026-03-09T20:25:38.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:38 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:25:38.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:38 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:25:38.811 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:38 vm05 ceph-mon[61345]: from='client.50275 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm05-94310-56"}]: dispatch 2026-03-09T20:25:38.811 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:38 vm05 ceph-mon[61345]: from='client.50275 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm05-94310-56"}]': finished 2026-03-09T20:25:38.811 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:38 vm05 ceph-mon[61345]: osdmap e288: 8 total, 8 up, 8 in 2026-03-09T20:25:38.811 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:38 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:25:38.811 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:38 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:25:38.811 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:38 vm05 ceph-mon[51870]: from='client.50275 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm05-94310-56"}]: dispatch 2026-03-09T20:25:38.811 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:38 vm05 ceph-mon[51870]: from='client.50275 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm05-94310-56"}]': finished 2026-03-09T20:25:38.811 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:38 vm05 ceph-mon[51870]: osdmap e288: 8 total, 8 up, 8 in 2026-03-09T20:25:38.811 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:38 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:25:38.811 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:38 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:25:38.811 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:25:38 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:25:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:25:39.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:39 vm05 ceph-mon[61345]: pgmap v378: 260 pgs: 260 active+clean; 8.3 MiB data, 732 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:25:39.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:39 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/4143182430' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm05-94310-57"}]: dispatch 2026-03-09T20:25:39.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:39 vm05 ceph-mon[61345]: from='client.50281 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm05-94310-57"}]: dispatch 2026-03-09T20:25:39.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:39 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/4143182430' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm05-94310-57"}]: dispatch 2026-03-09T20:25:39.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:39 vm05 ceph-mon[61345]: from='client.50281 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm05-94310-57"}]: dispatch 2026-03-09T20:25:39.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:39 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/4143182430' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm05-94310-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:25:39.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:39 vm05 ceph-mon[61345]: from='client.50281 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm05-94310-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:25:39.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:39 vm05 ceph-mon[51870]: pgmap v378: 260 pgs: 260 active+clean; 8.3 MiB data, 732 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:25:39.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:39 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/4143182430' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm05-94310-57"}]: dispatch 2026-03-09T20:25:39.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:39 vm05 ceph-mon[51870]: from='client.50281 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm05-94310-57"}]: dispatch 2026-03-09T20:25:39.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:39 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/4143182430' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm05-94310-57"}]: dispatch 2026-03-09T20:25:39.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:39 vm05 ceph-mon[51870]: from='client.50281 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm05-94310-57"}]: dispatch 2026-03-09T20:25:39.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:39 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/4143182430' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm05-94310-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:25:39.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:39 vm05 ceph-mon[51870]: from='client.50281 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm05-94310-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:25:40.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:39 vm09 ceph-mon[54524]: pgmap v378: 260 pgs: 260 active+clean; 8.3 MiB data, 732 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:25:40.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:39 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/4143182430' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm05-94310-57"}]: dispatch 2026-03-09T20:25:40.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:39 vm09 ceph-mon[54524]: from='client.50281 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm05-94310-57"}]: dispatch 2026-03-09T20:25:40.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:39 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/4143182430' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm05-94310-57"}]: dispatch 2026-03-09T20:25:40.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:39 vm09 ceph-mon[54524]: from='client.50281 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm05-94310-57"}]: dispatch 2026-03-09T20:25:40.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:39 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/4143182430' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm05-94310-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:25:40.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:39 vm09 ceph-mon[54524]: from='client.50281 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm05-94310-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:25:40.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:40 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-45","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:25:40.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:40 vm05 ceph-mon[61345]: from='client.50281 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm05-94310-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:25:40.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:40 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/4143182430' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushPP_vm05-94310-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm05-94310-57"}]: dispatch 2026-03-09T20:25:40.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:40 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-45", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:25:40.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:40 vm05 ceph-mon[61345]: osdmap e289: 8 total, 8 up, 8 in 2026-03-09T20:25:40.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:40 vm05 ceph-mon[61345]: from='client.50281 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushPP_vm05-94310-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm05-94310-57"}]: dispatch 2026-03-09T20:25:40.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:40 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-45", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:25:40.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:40 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-45", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:25:40.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:40 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-45","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T20:25:40.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:40 vm05 ceph-mon[61345]: osdmap e290: 8 total, 8 up, 8 in 2026-03-09T20:25:40.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:40 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-45","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T20:25:40.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:40 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-45","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:25:40.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:40 vm05 ceph-mon[51870]: from='client.50281 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm05-94310-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:25:40.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:40 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/4143182430' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushPP_vm05-94310-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm05-94310-57"}]: dispatch 2026-03-09T20:25:40.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:40 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-45", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:25:40.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:40 vm05 ceph-mon[51870]: osdmap e289: 8 total, 8 up, 8 in 2026-03-09T20:25:40.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:40 vm05 ceph-mon[51870]: from='client.50281 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushPP_vm05-94310-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm05-94310-57"}]: dispatch 2026-03-09T20:25:40.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:40 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-45", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:25:40.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:40 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-45", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:25:40.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:40 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-45","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T20:25:40.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:40 vm05 ceph-mon[51870]: osdmap e290: 8 total, 8 up, 8 in 2026-03-09T20:25:40.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:40 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-45","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T20:25:41.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:40 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-45","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:25:41.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:40 vm09 ceph-mon[54524]: from='client.50281 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm05-94310-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:25:41.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:40 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/4143182430' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushPP_vm05-94310-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm05-94310-57"}]: dispatch 2026-03-09T20:25:41.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:40 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-45", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:25:41.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:40 vm09 ceph-mon[54524]: osdmap e289: 8 total, 8 up, 8 in 2026-03-09T20:25:41.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:40 vm09 ceph-mon[54524]: from='client.50281 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushPP_vm05-94310-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm05-94310-57"}]: dispatch 2026-03-09T20:25:41.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:40 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-45", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:25:41.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:40 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-45", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:25:41.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:40 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-45","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T20:25:41.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:40 vm09 ceph-mon[54524]: osdmap e290: 8 total, 8 up, 8 in 2026-03-09T20:25:41.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:40 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-45","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T20:25:41.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:41 vm05 ceph-mon[61345]: pgmap v381: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 732 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:25:41.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:41 vm05 ceph-mon[61345]: from='client.50281 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushPP_vm05-94310-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm05-94310-57"}]': finished 2026-03-09T20:25:41.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:41 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-45","var": "hit_set_count","val": "3"}]': finished 2026-03-09T20:25:41.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:41 vm05 ceph-mon[61345]: osdmap e291: 8 total, 8 up, 8 in 2026-03-09T20:25:41.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:41 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T20:25:41.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:41 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T20:25:41.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:41 vm05 ceph-mon[51870]: pgmap v381: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 732 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:25:41.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:41 vm05 ceph-mon[51870]: from='client.50281 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushPP_vm05-94310-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm05-94310-57"}]': finished 2026-03-09T20:25:41.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:41 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-45","var": "hit_set_count","val": "3"}]': finished 2026-03-09T20:25:41.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:41 vm05 ceph-mon[51870]: osdmap e291: 8 total, 8 up, 8 in 2026-03-09T20:25:41.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:41 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T20:25:41.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:41 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T20:25:42.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:41 vm09 ceph-mon[54524]: pgmap v381: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 732 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:25:42.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:41 vm09 ceph-mon[54524]: from='client.50281 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushPP_vm05-94310-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm05-94310-57"}]': finished 2026-03-09T20:25:42.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:41 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-45","var": "hit_set_count","val": "3"}]': finished 2026-03-09T20:25:42.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:41 vm09 ceph-mon[54524]: osdmap e291: 8 total, 8 up, 8 in 2026-03-09T20:25:42.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:41 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T20:25:42.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:41 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T20:25:43.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:43 vm05 ceph-mon[61345]: pgmap v384: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 709 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:25:43.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:43 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-45","var": "hit_set_period","val": "3"}]': finished 2026-03-09T20:25:43.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:43 vm05 ceph-mon[61345]: osdmap e292: 8 total, 8 up, 8 in 2026-03-09T20:25:43.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:43 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-45","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T20:25:43.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:43 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-45","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T20:25:43.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:43 vm05 ceph-mon[61345]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:25:43.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:43 vm05 ceph-mon[51870]: pgmap v384: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 709 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:25:43.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:43 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-45","var": "hit_set_period","val": "3"}]': finished 2026-03-09T20:25:43.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:43 vm05 ceph-mon[51870]: osdmap e292: 8 total, 8 up, 8 in 2026-03-09T20:25:43.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:43 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-45","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T20:25:43.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:43 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-45","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T20:25:43.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:43 vm05 ceph-mon[51870]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:25:44.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:43 vm09 ceph-mon[54524]: pgmap v384: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 709 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:25:44.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:43 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-45","var": "hit_set_period","val": "3"}]': finished 2026-03-09T20:25:44.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:43 vm09 ceph-mon[54524]: osdmap e292: 8 total, 8 up, 8 in 2026-03-09T20:25:44.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:43 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-45","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T20:25:44.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:43 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-45","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T20:25:44.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:43 vm09 ceph-mon[54524]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:25:44.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:44 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-45","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T20:25:44.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:44 vm05 ceph-mon[61345]: osdmap e293: 8 total, 8 up, 8 in 2026-03-09T20:25:44.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:44 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T20:25:44.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:44 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/4143182430' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm05-94310-57"}]: dispatch 2026-03-09T20:25:44.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:44 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T20:25:44.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:44 vm05 ceph-mon[61345]: from='client.50281 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm05-94310-57"}]: dispatch 2026-03-09T20:25:44.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:44 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1", "id": [3, 2]}]: dispatch 2026-03-09T20:25:44.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:44 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1", "id": [3, 2]}]: dispatch 2026-03-09T20:25:44.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:44 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.8", "id": [6, 1, 0, 7]}]: dispatch 2026-03-09T20:25:44.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:44 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.8", "id": [6, 1, 0, 7]}]: dispatch 2026-03-09T20:25:44.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:44 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.d", "id": [3, 1]}]: dispatch 2026-03-09T20:25:44.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:44 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.d", 
"id": [3, 1]}]: dispatch 2026-03-09T20:25:44.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:44 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.e", "id": [3, 1]}]: dispatch 2026-03-09T20:25:44.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:44 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.e", "id": [3, 1]}]: dispatch 2026-03-09T20:25:44.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:44 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-45","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T20:25:44.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:44 vm05 ceph-mon[51870]: osdmap e293: 8 total, 8 up, 8 in 2026-03-09T20:25:44.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:44 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T20:25:44.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:44 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/4143182430' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm05-94310-57"}]: dispatch 2026-03-09T20:25:44.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:44 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T20:25:44.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:44 vm05 ceph-mon[51870]: from='client.50281 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm05-94310-57"}]: dispatch 2026-03-09T20:25:44.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:44 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1", "id": [3, 2]}]: dispatch 2026-03-09T20:25:44.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:44 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1", "id": [3, 2]}]: dispatch 2026-03-09T20:25:44.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:44 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.8", "id": [6, 1, 0, 7]}]: dispatch 2026-03-09T20:25:44.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:44 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.8", "id": [6, 1, 0, 7]}]: dispatch 2026-03-09T20:25:44.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:44 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.d", "id": [3, 1]}]: dispatch 2026-03-09T20:25:44.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:44 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.d", "id": [3, 1]}]: dispatch 2026-03-09T20:25:44.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:44 
vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.e", "id": [3, 1]}]: dispatch 2026-03-09T20:25:44.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:44 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.e", "id": [3, 1]}]: dispatch 2026-03-09T20:25:45.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:44 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-45","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T20:25:45.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:44 vm09 ceph-mon[54524]: osdmap e293: 8 total, 8 up, 8 in 2026-03-09T20:25:45.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:44 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T20:25:45.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:44 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/4143182430' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm05-94310-57"}]: dispatch 2026-03-09T20:25:45.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:44 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T20:25:45.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:44 vm09 ceph-mon[54524]: from='client.50281 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm05-94310-57"}]: dispatch 2026-03-09T20:25:45.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:44 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1", "id": [3, 2]}]: dispatch 2026-03-09T20:25:45.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:44 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1", "id": [3, 2]}]: dispatch 2026-03-09T20:25:45.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:44 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.8", "id": [6, 1, 0, 7]}]: dispatch 2026-03-09T20:25:45.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:44 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.8", "id": [6, 1, 0, 7]}]: dispatch 2026-03-09T20:25:45.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:44 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.d", "id": [3, 1]}]: dispatch 2026-03-09T20:25:45.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:44 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.d", "id": [3, 1]}]: dispatch 2026-03-09T20:25:45.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:44 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd 
pg-upmap-items", "format": "json", "pgid": "39.e", "id": [3, 1]}]: dispatch 2026-03-09T20:25:45.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:44 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.e", "id": [3, 1]}]: dispatch 2026-03-09T20:25:45.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:45 vm09 ceph-mon[54524]: pgmap v387: 292 pgs: 292 active+clean; 8.3 MiB data, 709 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:25:45.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:45 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-45","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-09T20:25:45.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:45 vm09 ceph-mon[54524]: from='client.50281 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm05-94310-57"}]': finished 2026-03-09T20:25:45.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:45 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1", "id": [3, 2]}]': finished 2026-03-09T20:25:45.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:45 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.8", "id": [6, 1, 0, 7]}]': finished 2026-03-09T20:25:45.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:45 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.d", "id": [3, 1]}]': finished 2026-03-09T20:25:45.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:45 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.e", "id": [3, 1]}]': finished 2026-03-09T20:25:45.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:45 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/4143182430' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm05-94310-57"}]: dispatch 2026-03-09T20:25:45.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:45 vm09 ceph-mon[54524]: osdmap e294: 8 total, 8 up, 8 in 2026-03-09T20:25:45.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:45 vm09 ceph-mon[54524]: from='client.50281 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm05-94310-57"}]: dispatch 2026-03-09T20:25:45.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:45 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:25:45.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:45 vm09 ceph-mon[54524]: from='client.50281 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushPP_vm05-94310-57"}]': finished 2026-03-09T20:25:45.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:45 vm09 ceph-mon[54524]: osdmap e295: 8 total, 8 up, 8 in 2026-03-09T20:25:45.773 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:25:45 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:25:45.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:45 vm05 ceph-mon[61345]: pgmap v387: 292 pgs: 292 active+clean; 8.3 MiB data, 709 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:25:45.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:45 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-45","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-09T20:25:45.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:45 vm05 ceph-mon[61345]: from='client.50281 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm05-94310-57"}]': finished 2026-03-09T20:25:45.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:45 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1", "id": [3, 2]}]': finished 2026-03-09T20:25:45.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:45 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.8", "id": [6, 1, 0, 7]}]': finished 2026-03-09T20:25:45.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:45 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.d", "id": [3, 1]}]': finished 2026-03-09T20:25:45.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:45 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.e", "id": [3, 1]}]': finished 2026-03-09T20:25:45.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:45 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/4143182430' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm05-94310-57"}]: dispatch 2026-03-09T20:25:45.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:45 vm05 ceph-mon[61345]: osdmap e294: 8 total, 8 up, 8 in 2026-03-09T20:25:45.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:45 vm05 ceph-mon[61345]: from='client.50281 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm05-94310-57"}]: dispatch 2026-03-09T20:25:45.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:45 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:25:45.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:45 vm05 ceph-mon[61345]: from='client.50281 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushPP_vm05-94310-57"}]': finished 2026-03-09T20:25:45.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:45 vm05 ceph-mon[61345]: osdmap e295: 8 total, 8 up, 8 in 2026-03-09T20:25:45.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:45 vm05 ceph-mon[51870]: pgmap v387: 292 pgs: 292 active+clean; 8.3 MiB data, 709 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:25:45.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:45 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-45","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-09T20:25:45.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:45 vm05 ceph-mon[51870]: from='client.50281 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm05-94310-57"}]': finished 2026-03-09T20:25:45.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:45 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1", "id": [3, 2]}]': finished 2026-03-09T20:25:45.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:45 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.8", "id": [6, 1, 0, 7]}]': finished 2026-03-09T20:25:45.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:45 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.d", "id": [3, 1]}]': finished 2026-03-09T20:25:45.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:45 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.e", "id": [3, 1]}]': finished 2026-03-09T20:25:45.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:45 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/4143182430' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm05-94310-57"}]: dispatch 2026-03-09T20:25:45.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:45 vm05 ceph-mon[51870]: osdmap e294: 8 total, 8 up, 8 in 2026-03-09T20:25:45.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:45 vm05 ceph-mon[51870]: from='client.50281 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm05-94310-57"}]: dispatch 2026-03-09T20:25:45.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:45 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:25:45.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:45 vm05 ceph-mon[51870]: from='client.50281 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushPP_vm05-94310-57"}]': finished 2026-03-09T20:25:45.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:45 vm05 ceph-mon[51870]: osdmap e295: 8 total, 8 up, 8 in 2026-03-09T20:25:47.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:46 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:25:47.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:46 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3000576580' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm05-94310-58"}]: dispatch 2026-03-09T20:25:47.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:46 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3000576580' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm05-94310-58"}]: dispatch 2026-03-09T20:25:47.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:46 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3000576580' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm05-94310-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:25:47.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:46 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:25:47.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:46 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3000576580' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm05-94310-58"}]: dispatch 2026-03-09T20:25:47.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:46 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3000576580' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm05-94310-58"}]: dispatch 2026-03-09T20:25:47.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:46 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3000576580' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm05-94310-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:25:47.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:46 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:25:47.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:46 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3000576580' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm05-94310-58"}]: dispatch 2026-03-09T20:25:47.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:46 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3000576580' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm05-94310-58"}]: dispatch 2026-03-09T20:25:47.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:46 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3000576580' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm05-94310-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:25:48.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:47 vm09 ceph-mon[54524]: pgmap v390: 292 pgs: 4 peering, 288 active+clean; 8.3 MiB data, 710 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:25:48.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:47 vm09 ceph-mon[54524]: Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY) 2026-03-09T20:25:48.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:47 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3000576580' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm05-94310-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:25:48.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:47 vm09 ceph-mon[54524]: osdmap e296: 8 total, 8 up, 8 in 2026-03-09T20:25:48.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:47 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3000576580' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm05-94310-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm05-94310-58"}]: dispatch 2026-03-09T20:25:48.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:47 vm09 ceph-mon[54524]: osdmap e297: 8 total, 8 up, 8 in 2026-03-09T20:25:48.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:47 vm05 ceph-mon[61345]: pgmap v390: 292 pgs: 4 peering, 288 active+clean; 8.3 MiB data, 710 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:25:48.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:47 vm05 ceph-mon[61345]: Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY) 2026-03-09T20:25:48.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:47 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3000576580' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm05-94310-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:25:48.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:47 vm05 ceph-mon[61345]: osdmap e296: 8 total, 8 up, 8 in 2026-03-09T20:25:48.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:47 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3000576580' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm05-94310-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm05-94310-58"}]: dispatch 2026-03-09T20:25:48.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:47 vm05 ceph-mon[61345]: osdmap e297: 8 total, 8 up, 8 in 2026-03-09T20:25:48.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:47 vm05 ceph-mon[51870]: pgmap v390: 292 pgs: 4 peering, 288 active+clean; 8.3 MiB data, 710 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:25:48.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:47 vm05 ceph-mon[51870]: Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY) 2026-03-09T20:25:48.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:47 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3000576580' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm05-94310-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:25:48.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:47 vm05 ceph-mon[51870]: osdmap e296: 8 total, 8 up, 8 in 2026-03-09T20:25:48.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:47 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3000576580' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm05-94310-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm05-94310-58"}]: dispatch 2026-03-09T20:25:48.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:47 vm05 ceph-mon[51870]: osdmap e297: 8 total, 8 up, 8 in 2026-03-09T20:25:48.909 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:25:48 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:25:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:25:50.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:49 vm09 ceph-mon[54524]: pgmap v393: 292 pgs: 4 peering, 288 active+clean; 8.3 MiB data, 710 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:25:50.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:49 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3000576580' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm05-94310-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm05-94310-58"}]': finished 2026-03-09T20:25:50.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:49 vm09 ceph-mon[54524]: osdmap e298: 8 total, 8 up, 8 in 2026-03-09T20:25:50.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:49 vm05 ceph-mon[51870]: pgmap v393: 292 pgs: 4 peering, 288 active+clean; 8.3 MiB data, 710 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:25:50.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:49 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3000576580' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm05-94310-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm05-94310-58"}]': finished 2026-03-09T20:25:50.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:49 vm05 ceph-mon[51870]: osdmap e298: 8 total, 8 up, 8 in 2026-03-09T20:25:50.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:49 vm05 ceph-mon[61345]: pgmap v393: 292 pgs: 4 peering, 288 active+clean; 8.3 MiB data, 710 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:25:50.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:49 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3000576580' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm05-94310-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm05-94310-58"}]': finished 2026-03-09T20:25:50.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:49 vm05 ceph-mon[61345]: osdmap e298: 8 total, 8 up, 8 in 2026-03-09T20:25:51.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:50 vm09 ceph-mon[54524]: osdmap e299: 8 total, 8 up, 8 in 2026-03-09T20:25:51.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:50 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:25:51.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:50 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:25:51.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:50 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:25:51.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:50 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:25:51.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:50 vm05 ceph-mon[61345]: osdmap e299: 8 total, 8 up, 8 in 2026-03-09T20:25:51.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:50 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:25:51.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:50 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:25:51.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:50 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' 
entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:25:51.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:50 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:25:51.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:50 vm05 ceph-mon[51870]: osdmap e299: 8 total, 8 up, 8 in 2026-03-09T20:25:51.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:50 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:25:51.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:50 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:25:51.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:50 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:25:51.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:50 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:25:52.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:51 vm09 ceph-mon[54524]: pgmap v396: 300 pgs: 8 unknown, 4 peering, 288 active+clean; 8.3 MiB data, 710 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:25:52.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:51 vm09 ceph-mon[54524]: osdmap e300: 8 total, 8 up, 8 in 2026-03-09T20:25:52.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:51 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3000576580' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm05-94310-58"}]: dispatch 2026-03-09T20:25:52.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:51 vm09 ceph-mon[54524]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:25:52.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:51 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3000576580' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm05-94310-58"}]': finished 2026-03-09T20:25:52.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:51 vm09 ceph-mon[54524]: osdmap e301: 8 total, 8 up, 8 in 2026-03-09T20:25:52.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:51 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3000576580' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm05-94310-58"}]: dispatch 2026-03-09T20:25:52.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:51 vm05 ceph-mon[61345]: pgmap v396: 300 pgs: 8 unknown, 4 peering, 288 active+clean; 8.3 MiB data, 710 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:25:52.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:51 vm05 ceph-mon[61345]: osdmap e300: 8 total, 8 up, 8 in 2026-03-09T20:25:52.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:51 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3000576580' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm05-94310-58"}]: dispatch 2026-03-09T20:25:52.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:51 vm05 ceph-mon[61345]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:25:52.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:51 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3000576580' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm05-94310-58"}]': finished 2026-03-09T20:25:52.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:51 vm05 ceph-mon[61345]: osdmap e301: 8 total, 8 up, 8 in 2026-03-09T20:25:52.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:51 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3000576580' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm05-94310-58"}]: dispatch 2026-03-09T20:25:52.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:51 vm05 ceph-mon[51870]: pgmap v396: 300 pgs: 8 unknown, 4 peering, 288 active+clean; 8.3 MiB data, 710 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:25:52.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:51 vm05 ceph-mon[51870]: osdmap e300: 8 total, 8 up, 8 in 2026-03-09T20:25:52.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:51 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3000576580' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm05-94310-58"}]: dispatch 2026-03-09T20:25:52.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:51 vm05 ceph-mon[51870]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:25:52.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:51 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3000576580' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm05-94310-58"}]': finished 2026-03-09T20:25:52.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:51 vm05 ceph-mon[51870]: osdmap e301: 8 total, 8 up, 8 in 2026-03-09T20:25:52.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:51 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3000576580' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm05-94310-58"}]: dispatch 2026-03-09T20:25:54.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:53 vm09 ceph-mon[54524]: pgmap v399: 292 pgs: 292 active+clean; 8.3 MiB data, 710 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T20:25:54.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:53 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3000576580' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm05-94310-58"}]': finished 2026-03-09T20:25:54.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:53 vm09 ceph-mon[54524]: osdmap e302: 8 total, 8 up, 8 in 2026-03-09T20:25:54.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:53 vm09 ceph-mon[54524]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 1 pg peering) 2026-03-09T20:25:54.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:53 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/533318424' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm05-94310-59"}]: dispatch 2026-03-09T20:25:54.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:53 vm09 ceph-mon[54524]: from='client.50290 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm05-94310-59"}]: dispatch 2026-03-09T20:25:54.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:53 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/533318424' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm05-94310-59"}]: dispatch 2026-03-09T20:25:54.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:53 vm09 ceph-mon[54524]: from='client.50290 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm05-94310-59"}]: dispatch 2026-03-09T20:25:54.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:53 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/533318424' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm05-94310-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:25:54.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:53 vm09 ceph-mon[54524]: from='client.50290 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm05-94310-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:25:54.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:53 vm05 ceph-mon[61345]: pgmap v399: 292 pgs: 292 active+clean; 8.3 MiB data, 710 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T20:25:54.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:53 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3000576580' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm05-94310-58"}]': finished 2026-03-09T20:25:54.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:53 vm05 ceph-mon[61345]: osdmap e302: 8 total, 8 up, 8 in 2026-03-09T20:25:54.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:53 vm05 ceph-mon[61345]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 1 pg peering) 2026-03-09T20:25:54.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:53 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/533318424' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm05-94310-59"}]: dispatch 2026-03-09T20:25:54.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:53 vm05 ceph-mon[61345]: from='client.50290 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm05-94310-59"}]: dispatch 2026-03-09T20:25:54.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:53 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/533318424' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm05-94310-59"}]: dispatch 2026-03-09T20:25:54.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:53 vm05 ceph-mon[61345]: from='client.50290 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm05-94310-59"}]: dispatch 2026-03-09T20:25:54.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:53 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/533318424' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm05-94310-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:25:54.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:53 vm05 ceph-mon[61345]: from='client.50290 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm05-94310-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:25:54.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:53 vm05 ceph-mon[51870]: pgmap v399: 292 pgs: 292 active+clean; 8.3 MiB data, 710 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T20:25:54.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:53 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3000576580' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm05-94310-58"}]': finished 2026-03-09T20:25:54.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:53 vm05 ceph-mon[51870]: osdmap e302: 8 total, 8 up, 8 in 2026-03-09T20:25:54.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:53 vm05 ceph-mon[51870]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 1 pg peering) 2026-03-09T20:25:54.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:53 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/533318424' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm05-94310-59"}]: dispatch 2026-03-09T20:25:54.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:53 vm05 ceph-mon[51870]: from='client.50290 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm05-94310-59"}]: dispatch 2026-03-09T20:25:54.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:53 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/533318424' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm05-94310-59"}]: dispatch 2026-03-09T20:25:54.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:53 vm05 ceph-mon[51870]: from='client.50290 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm05-94310-59"}]: dispatch 2026-03-09T20:25:54.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:53 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/533318424' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm05-94310-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:25:54.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:53 vm05 ceph-mon[51870]: from='client.50290 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm05-94310-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:25:55.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:54 vm09 ceph-mon[54524]: from='client.50290 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm05-94310-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:25:55.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:54 vm09 ceph-mon[54524]: osdmap e303: 8 total, 8 up, 8 in 2026-03-09T20:25:55.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:54 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/533318424' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm05-94310-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm05-94310-59"}]: dispatch 2026-03-09T20:25:55.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:54 vm09 ceph-mon[54524]: from='client.50290 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm05-94310-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm05-94310-59"}]: dispatch 2026-03-09T20:25:55.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:54 vm05 ceph-mon[61345]: from='client.50290 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm05-94310-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:25:55.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:54 vm05 ceph-mon[61345]: osdmap e303: 8 total, 8 up, 8 in 2026-03-09T20:25:55.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:54 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/533318424' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm05-94310-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm05-94310-59"}]: dispatch 2026-03-09T20:25:55.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:54 vm05 ceph-mon[61345]: from='client.50290 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm05-94310-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm05-94310-59"}]: dispatch 2026-03-09T20:25:55.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:54 vm05 ceph-mon[51870]: from='client.50290 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm05-94310-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:25:55.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:54 vm05 ceph-mon[51870]: osdmap e303: 8 total, 8 up, 8 in 2026-03-09T20:25:55.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:54 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/533318424' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm05-94310-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm05-94310-59"}]: dispatch 2026-03-09T20:25:55.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:54 vm05 ceph-mon[51870]: from='client.50290 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm05-94310-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm05-94310-59"}]: dispatch 2026-03-09T20:25:55.754 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:25:55 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:25:56.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:55 vm09 ceph-mon[54524]: pgmap v402: 292 pgs: 292 active+clean; 8.3 MiB data, 710 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T20:25:56.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:55 vm09 ceph-mon[54524]: osdmap e304: 8 total, 8 up, 8 in 2026-03-09T20:25:56.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:55 vm05 ceph-mon[61345]: pgmap v402: 292 pgs: 292 active+clean; 8.3 MiB data, 710 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T20:25:56.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:55 vm05 ceph-mon[61345]: osdmap e304: 8 total, 8 up, 8 in 2026-03-09T20:25:56.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:55 vm05 ceph-mon[51870]: pgmap v402: 292 pgs: 292 active+clean; 8.3 MiB data, 710 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T20:25:56.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:55 vm05 ceph-mon[51870]: osdmap e304: 8 total, 8 up, 8 in 2026-03-09T20:25:57.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:56 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:25:57.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:56 vm05 ceph-mon[61345]: from='client.50290 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm05-94310-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm05-94310-59"}]': finished 2026-03-09T20:25:57.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:56 vm05 ceph-mon[61345]: osdmap e305: 8 total, 8 up, 8 in 2026-03-09T20:25:57.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:56 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:25:57.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:56 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:25:57.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:56 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-45"}]: dispatch 2026-03-09T20:25:57.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:56 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-45"}]: dispatch 2026-03-09T20:25:57.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:56 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:25:57.159 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:56 vm05 ceph-mon[51870]: from='client.50290 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm05-94310-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm05-94310-59"}]': finished 2026-03-09T20:25:57.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:56 vm05 ceph-mon[51870]: osdmap e305: 8 total, 8 up, 8 in 2026-03-09T20:25:57.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:56 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:25:57.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:56 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:25:57.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:56 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-45"}]: dispatch 2026-03-09T20:25:57.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:56 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-45"}]: dispatch 2026-03-09T20:25:57.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:56 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:25:57.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:56 vm09 ceph-mon[54524]: from='client.50290 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm05-94310-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm05-94310-59"}]': finished 2026-03-09T20:25:57.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:56 vm09 ceph-mon[54524]: osdmap e305: 8 total, 8 up, 8 in 2026-03-09T20:25:57.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:56 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:25:57.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:56 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:25:57.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:56 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-45"}]: dispatch 2026-03-09T20:25:57.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:56 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-45"}]: dispatch 2026-03-09T20:25:58.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:57 vm05 ceph-mon[61345]: pgmap v405: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 711 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 2 op/s 2026-03-09T20:25:58.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:57 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-45"}]': finished 2026-03-09T20:25:58.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:57 vm05 ceph-mon[61345]: osdmap e306: 8 total, 8 up, 8 in 2026-03-09T20:25:58.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:57 vm05 ceph-mon[61345]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:25:58.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:57 vm05 ceph-mon[51870]: pgmap v405: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 711 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 2 op/s 2026-03-09T20:25:58.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:57 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-45"}]': finished 2026-03-09T20:25:58.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:57 vm05 ceph-mon[51870]: osdmap e306: 8 total, 8 up, 8 in 2026-03-09T20:25:58.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:57 vm05 ceph-mon[51870]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:25:58.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:57 vm09 ceph-mon[54524]: pgmap v405: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 711 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 2 op/s 2026-03-09T20:25:58.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:57 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-45"}]': finished 2026-03-09T20:25:58.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:57 vm09 ceph-mon[54524]: osdmap e306: 8 total, 8 up, 8 in 2026-03-09T20:25:58.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:57 vm09 ceph-mon[54524]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:25:58.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:58 
vm05 ceph-mon[61345]: osdmap e307: 8 total, 8 up, 8 in 2026-03-09T20:25:58.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:58 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/533318424' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm05-94310-59"}]: dispatch 2026-03-09T20:25:58.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:58 vm05 ceph-mon[61345]: from='client.50290 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm05-94310-59"}]: dispatch 2026-03-09T20:25:58.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:58 vm05 ceph-mon[51870]: osdmap e307: 8 total, 8 up, 8 in 2026-03-09T20:25:58.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:58 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/533318424' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm05-94310-59"}]: dispatch 2026-03-09T20:25:58.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:58 vm05 ceph-mon[51870]: from='client.50290 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm05-94310-59"}]: dispatch 2026-03-09T20:25:58.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:25:58 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:25:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:25:59.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:58 vm09 ceph-mon[54524]: osdmap e307: 8 total, 8 up, 8 in 2026-03-09T20:25:59.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:58 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/533318424' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm05-94310-59"}]: dispatch 2026-03-09T20:25:59.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:58 vm09 ceph-mon[54524]: from='client.50290 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm05-94310-59"}]: dispatch 2026-03-09T20:25:59.917 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:7462ddf6:::.RoundTripAppendPP (3060 ms) 2026-03-09T20:25:59.917 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.RacingRemovePP 2026-03-09T20:25:59.917 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.RacingRemovePP (3031 ms) 2026-03-09T20:25:59.917 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripCmpExtPP 2026-03-09T20:25:59.917 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripCmpExtPP (3127 ms) 2026-03-09T20:25:59.917 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripCmpExtPP2 2026-03-09T20:25:59.917 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripCmpExtPP2 (3019 ms) 2026-03-09T20:25:59.917 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.PoolEIOFlag 2026-03-09T20:25:59.917 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: setting pool EIO 2026-03-09T20:25:59.917 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: max_success 100, min_failed 101 2026-03-09T20:25:59.917 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.PoolEIOFlag (4032 ms) 2026-03-09T20:25:59.917 
INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.MultiReads 2026-03-09T20:25:59.917 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.MultiReads (3046 ms) 2026-03-09T20:25:59.917 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [----------] 32 tests from LibRadosAio (122227 ms total) 2026-03-09T20:25:59.917 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: 2026-03-09T20:25:59.917 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [----------] 4 tests from LibRadosAioPP 2026-03-09T20:25:59.917 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAioPP.ReadIntoBufferlist 2026-03-09T20:25:59.917 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAioPP.ReadIntoBufferlist (3166 ms) 2026-03-09T20:25:59.917 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAioPP.XattrsRoundTripPP 2026-03-09T20:25:59.918 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAioPP.XattrsRoundTripPP (9072 ms) 2026-03-09T20:25:59.918 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAioPP.RmXattrPP 2026-03-09T20:25:59.918 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAioPP.RmXattrPP (15046 ms) 2026-03-09T20:25:59.918 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAioPP.RemoveTestPP 2026-03-09T20:25:59.918 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAioPP.RemoveTestPP (3090 ms) 2026-03-09T20:25:59.918 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [----------] 4 tests from LibRadosAioPP (30374 ms total) 2026-03-09T20:25:59.918 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: 2026-03-09T20:25:59.918 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [----------] 1 test from LibRadosIoPP 2026-03-09T20:25:59.918 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosIoPP.XattrListPP 2026-03-09T20:25:59.918 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosIoPP.XattrListPP (3026 ms) 2026-03-09T20:25:59.918 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [----------] 1 test from LibRadosIoPP (3026 ms total) 2026-03-09T20:25:59.918 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: 2026-03-09T20:25:59.918 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [----------] 20 tests from LibRadosAioEC 2026-03-09T20:25:59.918 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.SimpleWritePP 2026-03-09T20:25:59.918 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAioEC.SimpleWritePP (13798 ms) 2026-03-09T20:25:59.918 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.WaitForSafePP 2026-03-09T20:25:59.918 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAioEC.WaitForSafePP (7137 ms) 2026-03-09T20:25:59.918 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.RoundTripPP 2026-03-09T20:25:59.918 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAioEC.RoundTripPP (7082 ms) 2026-03-09T20:25:59.918 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.RoundTripPP2 2026-03-09T20:25:59.918 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAioEC.RoundTripPP2 (7014 ms) 2026-03-09T20:25:59.918 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.RoundTripPP3 2026-03-09T20:25:59.918 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAioEC.RoundTripPP3 (2745 
ms) 2026-03-09T20:25:59.918 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.RoundTripSparseReadPP 2026-03-09T20:25:59.918 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAioEC.RoundTripSparseReadPP (7054 ms) 2026-03-09T20:25:59.918 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.RoundTripAppendPP 2026-03-09T20:25:59.918 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAioEC.RoundTripAppendPP (7147 ms) 2026-03-09T20:25:59.918 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.IsCompletePP 2026-03-09T20:25:59.918 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAioEC.IsCompletePP (7110 ms) 2026-03-09T20:25:59.918 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.IsSafePP 2026-03-09T20:25:59.918 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAioEC.IsSafePP (7032 ms) 2026-03-09T20:25:59.918 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.ReturnValuePP 2026-03-09T20:25:59.918 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAioEC.ReturnValuePP (7099 ms) 2026-03-09T20:25:59.918 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.FlushPP 2026-03-09T20:25:59.918 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAioEC.FlushPP (7126 ms) 2026-03-09T20:25:59.918 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.FlushAsyncPP 2026-03-09T20:25:59.918 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAioEC.FlushAsyncPP (7129 ms) 2026-03-09T20:25:59.918 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.RoundTripWriteFullPP 2026-03-09T20:25:59.918 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAioEC.RoundTripWriteFullPP (7181 ms) 2026-03-09T20:26:00.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:59 vm05 ceph-mon[61345]: pgmap v408: 260 pgs: 260 active+clean; 8.3 MiB data, 711 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:26:00.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:59 vm05 ceph-mon[61345]: from='client.50290 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm05-94310-59"}]': finished 2026-03-09T20:26:00.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:59 vm05 ceph-mon[61345]: osdmap e308: 8 total, 8 up, 8 in 2026-03-09T20:26:00.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:59 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/533318424' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm05-94310-59"}]: dispatch 2026-03-09T20:26:00.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:59 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-47","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:26:00.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:59 vm05 ceph-mon[61345]: from='client.50290 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm05-94310-59"}]: dispatch 2026-03-09T20:26:00.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:59 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-47","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:26:00.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:25:59 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:26:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:59 vm05 ceph-mon[51870]: pgmap v408: 260 pgs: 260 active+clean; 8.3 MiB data, 711 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:26:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:59 vm05 ceph-mon[51870]: from='client.50290 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm05-94310-59"}]': finished 2026-03-09T20:26:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:59 vm05 ceph-mon[51870]: osdmap e308: 8 total, 8 up, 8 in 2026-03-09T20:26:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:59 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/533318424' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm05-94310-59"}]: dispatch 2026-03-09T20:26:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:59 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-47","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:26:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:59 vm05 ceph-mon[51870]: from='client.50290 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm05-94310-59"}]: dispatch 2026-03-09T20:26:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:59 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-47","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:26:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:25:59 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:26:00.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:59 vm09 ceph-mon[54524]: pgmap v408: 260 pgs: 260 active+clean; 8.3 MiB data, 711 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:26:00.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:59 vm09 ceph-mon[54524]: from='client.50290 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm05-94310-59"}]': finished 2026-03-09T20:26:00.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:59 vm09 ceph-mon[54524]: osdmap e308: 8 total, 8 up, 8 in 2026-03-09T20:26:00.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:59 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/533318424' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm05-94310-59"}]: dispatch 2026-03-09T20:26:00.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:59 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-47","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:26:00.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:59 vm09 ceph-mon[54524]: from='client.50290 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm05-94310-59"}]: dispatch 2026-03-09T20:26:00.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:59 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-47","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:26:00.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:25:59 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:26:01.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:00 vm09 ceph-mon[54524]: from='client.50290 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm05-94310-59"}]': finished 2026-03-09T20:26:01.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:00 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-47","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:26:01.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:00 vm09 ceph-mon[54524]: osdmap e309: 8 total, 8 up, 8 in 2026-03-09T20:26:01.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:00 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-47", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:26:01.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:00 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-47", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:26:01.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:00 vm09 ceph-mon[54524]: pgmap v411: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 711 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:26:01.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:00 vm05 ceph-mon[61345]: from='client.50290 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm05-94310-59"}]': finished 2026-03-09T20:26:01.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:00 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-47","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:26:01.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:00 vm05 ceph-mon[61345]: osdmap e309: 8 total, 8 up, 8 in 2026-03-09T20:26:01.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:00 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-47", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:26:01.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:00 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-47", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:26:01.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:00 vm05 ceph-mon[61345]: pgmap v411: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 711 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:26:01.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:00 vm05 ceph-mon[51870]: from='client.50290 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm05-94310-59"}]': finished 2026-03-09T20:26:01.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:00 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-47","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:26:01.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:00 vm05 ceph-mon[51870]: osdmap e309: 8 total, 8 up, 8 in 2026-03-09T20:26:01.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:00 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-47", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:26:01.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:00 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-47", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:26:01.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:00 vm05 ceph-mon[51870]: pgmap v411: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 711 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:26:02.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:01 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-47", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:26:02.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:01 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-47"}]: dispatch 2026-03-09T20:26:02.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:01 vm09 ceph-mon[54524]: osdmap e310: 8 total, 8 up, 8 in 2026-03-09T20:26:02.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:01 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-47"}]: dispatch 2026-03-09T20:26:02.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:01 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/542165081' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm05-94310-60","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:26:02.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:01 vm09 ceph-mon[54524]: from='client.49541 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm05-94310-60","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:26:02.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:01 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-47", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:26:02.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:01 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-47"}]: dispatch 2026-03-09T20:26:02.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:01 vm05 ceph-mon[61345]: osdmap e310: 8 total, 8 up, 8 in 2026-03-09T20:26:02.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:01 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-47"}]: dispatch 2026-03-09T20:26:02.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:01 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/542165081' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm05-94310-60","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:26:02.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:01 vm05 ceph-mon[61345]: from='client.49541 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm05-94310-60","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:26:02.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:01 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-47", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:26:02.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:01 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-47"}]: dispatch 2026-03-09T20:26:02.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:01 vm05 ceph-mon[51870]: osdmap e310: 8 total, 8 up, 8 in 2026-03-09T20:26:02.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:01 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-47"}]: dispatch 2026-03-09T20:26:02.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:01 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/542165081' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm05-94310-60","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:26:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:01 vm05 ceph-mon[51870]: from='client.49541 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm05-94310-60","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:26:03.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:02 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-47"}]': finished 2026-03-09T20:26:03.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:02 vm09 ceph-mon[54524]: from='client.49541 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm05-94310-60","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:26:03.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:02 vm09 ceph-mon[54524]: osdmap e311: 8 total, 8 up, 8 in 2026-03-09T20:26:03.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:02 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-47", "mode": "writeback"}]: dispatch 2026-03-09T20:26:03.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:02 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-47", "mode": "writeback"}]: dispatch 2026-03-09T20:26:03.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:02 vm09 ceph-mon[54524]: pgmap v414: 324 pgs: 32 unknown, 292 active+clean; 8.3 MiB data, 716 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T20:26:03.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:02 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-47"}]': finished 2026-03-09T20:26:03.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:02 vm05 ceph-mon[61345]: from='client.49541 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm05-94310-60","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:26:03.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:02 vm05 ceph-mon[61345]: osdmap e311: 8 total, 8 up, 8 in 2026-03-09T20:26:03.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:02 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-47", "mode": "writeback"}]: dispatch 2026-03-09T20:26:03.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:02 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-47", "mode": "writeback"}]: dispatch 2026-03-09T20:26:03.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:02 vm05 ceph-mon[61345]: pgmap v414: 324 pgs: 32 unknown, 292 active+clean; 8.3 MiB data, 716 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T20:26:03.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:02 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-47"}]': finished 2026-03-09T20:26:03.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:02 vm05 ceph-mon[51870]: from='client.49541 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm05-94310-60","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:26:03.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:02 vm05 ceph-mon[51870]: osdmap e311: 8 total, 8 up, 8 in 2026-03-09T20:26:03.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:02 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-47", "mode": "writeback"}]: dispatch 2026-03-09T20:26:03.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:02 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-47", "mode": "writeback"}]: dispatch 2026-03-09T20:26:03.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:02 vm05 ceph-mon[51870]: pgmap v414: 324 pgs: 32 unknown, 292 active+clean; 8.3 MiB data, 716 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T20:26:04.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:03 vm09 ceph-mon[54524]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:26:04.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:03 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-47", "mode": "writeback"}]': finished 2026-03-09T20:26:04.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:03 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T20:26:04.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:03 vm09 ceph-mon[54524]: osdmap e312: 8 total, 8 up, 8 in 2026-03-09T20:26:04.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:03 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T20:26:04.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:03 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3285842174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm05-94310-61"}]: dispatch 2026-03-09T20:26:04.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:03 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3285842174' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm05-94310-61"}]: dispatch 2026-03-09T20:26:04.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:03 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3285842174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm05-94310-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:26:04.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:03 vm05 ceph-mon[61345]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:26:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:03 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-47", "mode": "writeback"}]': finished 2026-03-09T20:26:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:03 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T20:26:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:03 vm05 ceph-mon[61345]: osdmap e312: 8 total, 8 up, 8 in 2026-03-09T20:26:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:03 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T20:26:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:03 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3285842174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm05-94310-61"}]: dispatch 2026-03-09T20:26:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:03 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3285842174' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm05-94310-61"}]: dispatch 2026-03-09T20:26:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:03 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3285842174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm05-94310-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:26:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:03 vm05 ceph-mon[51870]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:26:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:03 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-47", "mode": "writeback"}]': finished 2026-03-09T20:26:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:03 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T20:26:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:03 vm05 ceph-mon[51870]: osdmap e312: 8 total, 8 up, 8 in 2026-03-09T20:26:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:03 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T20:26:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:03 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3285842174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm05-94310-61"}]: dispatch 2026-03-09T20:26:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:03 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3285842174' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm05-94310-61"}]: dispatch 2026-03-09T20:26:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:03 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3285842174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm05-94310-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:26:05.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:05 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "hit_set_count","val": "2"}]': finished 2026-03-09T20:26:05.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:05 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3285842174' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm05-94310-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:26:05.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:05 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:26:05.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:05 vm05 ceph-mon[61345]: osdmap e313: 8 total, 8 up, 8 in 2026-03-09T20:26:05.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:05 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3285842174' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm05-94310-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm05-94310-61"}]: dispatch 2026-03-09T20:26:05.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:05 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:26:05.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:05 vm05 ceph-mon[61345]: pgmap v417: 292 pgs: 292 active+clean; 8.3 MiB data, 716 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T20:26:05.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:05 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "hit_set_count","val": "2"}]': finished 2026-03-09T20:26:05.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:05 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3285842174' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm05-94310-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:26:05.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:05 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:26:05.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:05 vm05 ceph-mon[51870]: osdmap e313: 8 total, 8 up, 8 in 2026-03-09T20:26:05.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:05 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3285842174' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm05-94310-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm05-94310-61"}]: dispatch 2026-03-09T20:26:05.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:05 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:26:05.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:05 vm05 ceph-mon[51870]: pgmap v417: 292 pgs: 292 active+clean; 8.3 MiB data, 716 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T20:26:05.446 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:05 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "hit_set_count","val": "2"}]': finished 2026-03-09T20:26:05.446 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:05 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3285842174' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm05-94310-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:26:05.446 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:05 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:26:05.446 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:05 vm09 ceph-mon[54524]: osdmap e313: 8 total, 8 up, 8 in 2026-03-09T20:26:05.446 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:05 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3285842174' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm05-94310-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm05-94310-61"}]: dispatch 2026-03-09T20:26:05.446 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:05 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:26:05.446 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:05 vm09 ceph-mon[54524]: pgmap v417: 292 pgs: 292 active+clean; 8.3 MiB data, 716 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T20:26:05.772 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:26:05 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:26:06.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:06 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "hit_set_period","val": "600"}]': finished 2026-03-09T20:26:06.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:06 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T20:26:06.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:06 vm05 ceph-mon[61345]: osdmap e314: 8 total, 8 up, 8 in 2026-03-09T20:26:06.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:06 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T20:26:06.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:06 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:26:06.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:06 vm05 ceph-mon[61345]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:26:06.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:06 vm05 ceph-mon[61345]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:26:06.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:06 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3285842174' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm05-94310-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm05-94310-61"}]': finished 2026-03-09T20:26:06.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:06 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T20:26:06.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:06 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T20:26:06.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:06 vm05 ceph-mon[61345]: osdmap e315: 8 total, 8 up, 8 in 2026-03-09T20:26:06.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:06 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T20:26:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:06 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "hit_set_period","val": "600"}]': finished 2026-03-09T20:26:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:06 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T20:26:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:06 vm05 ceph-mon[51870]: osdmap e314: 8 total, 8 up, 8 in 2026-03-09T20:26:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:06 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T20:26:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:06 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:26:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:06 vm05 ceph-mon[51870]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:26:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:06 vm05 ceph-mon[51870]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:26:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:06 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3285842174' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm05-94310-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm05-94310-61"}]': finished 2026-03-09T20:26:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:06 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T20:26:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:06 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T20:26:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:06 vm05 ceph-mon[51870]: osdmap e315: 8 total, 8 up, 8 in 2026-03-09T20:26:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:06 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T20:26:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:06 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "hit_set_period","val": "600"}]': finished 2026-03-09T20:26:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:06 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T20:26:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:06 vm09 ceph-mon[54524]: osdmap e314: 8 total, 8 up, 8 in 2026-03-09T20:26:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:06 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T20:26:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:06 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:26:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:06 vm09 ceph-mon[54524]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:26:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:06 vm09 ceph-mon[54524]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:26:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:06 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3285842174' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm05-94310-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm05-94310-61"}]': finished 2026-03-09T20:26:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:06 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T20:26:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:06 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T20:26:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:06 vm09 ceph-mon[54524]: osdmap e315: 8 total, 8 up, 8 in 2026-03-09T20:26:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:06 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T20:26:07.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:07 vm05 ceph-mon[61345]: pgmap v420: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 716 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:26:07.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:07 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T20:26:07.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:07 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T20:26:07.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:07 vm05 ceph-mon[61345]: osdmap e316: 8 total, 8 up, 8 in 2026-03-09T20:26:07.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:07 vm05 ceph-mon[51870]: pgmap v420: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 716 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:26:07.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:07 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T20:26:07.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:07 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T20:26:07.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:07 vm05 ceph-mon[51870]: osdmap e316: 8 total, 8 up, 8 in 2026-03-09T20:26:07.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:07 vm09 ceph-mon[54524]: pgmap v420: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 716 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:26:07.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:07 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T20:26:07.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:07 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T20:26:07.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:07 vm09 ceph-mon[54524]: osdmap e316: 8 total, 8 up, 8 in 2026-03-09T20:26:08.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:08 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T20:26:08.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:08 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "hit_set_grade_decay_rate","val": "20"}]': finished 2026-03-09T20:26:08.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:08 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T20:26:08.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:08 vm05 ceph-mon[61345]: osdmap e317: 8 total, 8 up, 8 in 2026-03-09T20:26:08.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:08 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T20:26:08.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:08 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3285842174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm05-94310-61"}]: dispatch 2026-03-09T20:26:08.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:08 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T20:26:08.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:08 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "hit_set_grade_decay_rate","val": "20"}]': finished 2026-03-09T20:26:08.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:08 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T20:26:08.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:08 vm05 ceph-mon[51870]: osdmap e317: 8 total, 8 up, 8 in 2026-03-09T20:26:08.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:08 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T20:26:08.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:08 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3285842174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm05-94310-61"}]: dispatch 2026-03-09T20:26:08.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:08 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T20:26:08.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:08 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "hit_set_grade_decay_rate","val": "20"}]': finished 2026-03-09T20:26:08.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:08 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T20:26:08.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:08 vm09 ceph-mon[54524]: osdmap e317: 8 total, 8 up, 8 in 2026-03-09T20:26:08.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:08 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T20:26:08.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:08 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3285842174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm05-94310-61"}]: dispatch 2026-03-09T20:26:08.909 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:26:08 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:26:08] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:26:09.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:09 vm05 ceph-mon[61345]: pgmap v423: 292 pgs: 292 active+clean; 8.3 MiB data, 716 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:26:09.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:09 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "hit_set_search_last_n","val": "1"}]': finished 2026-03-09T20:26:09.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:09 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3285842174' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm05-94310-61"}]': finished 2026-03-09T20:26:09.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:09 vm05 ceph-mon[61345]: osdmap e318: 8 total, 8 up, 8 in 2026-03-09T20:26:09.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:09 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3285842174' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm05-94310-61"}]: dispatch 2026-03-09T20:26:09.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:09 vm05 ceph-mon[51870]: pgmap v423: 292 pgs: 292 active+clean; 8.3 MiB data, 716 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:26:09.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:09 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "hit_set_search_last_n","val": "1"}]': finished 2026-03-09T20:26:09.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:09 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3285842174' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm05-94310-61"}]': finished 2026-03-09T20:26:09.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:09 vm05 ceph-mon[51870]: osdmap e318: 8 total, 8 up, 8 in 2026-03-09T20:26:09.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:09 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3285842174' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm05-94310-61"}]: dispatch 2026-03-09T20:26:09.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:09 vm09 ceph-mon[54524]: pgmap v423: 292 pgs: 292 active+clean; 8.3 MiB data, 716 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:26:09.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:09 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-47","var": "hit_set_search_last_n","val": "1"}]': finished 2026-03-09T20:26:09.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:09 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3285842174' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm05-94310-61"}]': finished 2026-03-09T20:26:09.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:09 vm09 ceph-mon[54524]: osdmap e318: 8 total, 8 up, 8 in 2026-03-09T20:26:09.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:09 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3285842174' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm05-94310-61"}]: dispatch 2026-03-09T20:26:10.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:10 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:10.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:10 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:10.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:10 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3285842174' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm05-94310-61"}]': finished 2026-03-09T20:26:10.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:10 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:26:10.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:10 vm05 ceph-mon[61345]: osdmap e319: 8 total, 8 up, 8 in 2026-03-09T20:26:10.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:10 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-47"}]: dispatch 2026-03-09T20:26:10.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:10 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-47"}]: dispatch 2026-03-09T20:26:10.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:10 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3641217190' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm05-94310-62"}]: dispatch 2026-03-09T20:26:10.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:10 vm05 ceph-mon[61345]: from='client.50305 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm05-94310-62"}]: dispatch 2026-03-09T20:26:10.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:10 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3641217190' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm05-94310-62"}]: dispatch 2026-03-09T20:26:10.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:10 vm05 ceph-mon[61345]: from='client.50305 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm05-94310-62"}]: dispatch 2026-03-09T20:26:10.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:10 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3641217190' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm05-94310-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:26:10.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:10 vm05 ceph-mon[61345]: from='client.50305 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm05-94310-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:26:10.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:10 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:10.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:10 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:10.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:10 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3285842174' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm05-94310-61"}]': finished 2026-03-09T20:26:10.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:10 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:26:10.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:10 vm05 ceph-mon[51870]: osdmap e319: 8 total, 8 up, 8 in 2026-03-09T20:26:10.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:10 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-47"}]: dispatch 2026-03-09T20:26:10.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:10 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-47"}]: dispatch 2026-03-09T20:26:10.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:10 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3641217190' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm05-94310-62"}]: dispatch 2026-03-09T20:26:10.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:10 vm05 ceph-mon[51870]: from='client.50305 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm05-94310-62"}]: dispatch 2026-03-09T20:26:10.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:10 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3641217190' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm05-94310-62"}]: dispatch 2026-03-09T20:26:10.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:10 vm05 ceph-mon[51870]: from='client.50305 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm05-94310-62"}]: dispatch 2026-03-09T20:26:10.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:10 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3641217190' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm05-94310-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:26:10.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:10 vm05 ceph-mon[51870]: from='client.50305 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm05-94310-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:26:10.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:10 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:10.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:10 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:10.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:10 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3285842174' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm05-94310-61"}]': finished 2026-03-09T20:26:10.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:10 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:26:10.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:10 vm09 ceph-mon[54524]: osdmap e319: 8 total, 8 up, 8 in 2026-03-09T20:26:10.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:10 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-47"}]: dispatch 2026-03-09T20:26:10.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:10 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-47"}]: dispatch 2026-03-09T20:26:10.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:10 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3641217190' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm05-94310-62"}]: dispatch 2026-03-09T20:26:10.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:10 vm09 ceph-mon[54524]: from='client.50305 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm05-94310-62"}]: dispatch 2026-03-09T20:26:10.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:10 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3641217190' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm05-94310-62"}]: dispatch 2026-03-09T20:26:10.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:10 vm09 ceph-mon[54524]: from='client.50305 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm05-94310-62"}]: dispatch 2026-03-09T20:26:10.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:10 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3641217190' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm05-94310-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:26:10.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:10 vm09 ceph-mon[54524]: from='client.50305 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm05-94310-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:26:11.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:11 vm05 ceph-mon[61345]: pgmap v426: 292 pgs: 292 active+clean; 8.3 MiB data, 716 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:26:11.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:11 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-47"}]': finished 2026-03-09T20:26:11.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:11 vm05 ceph-mon[61345]: from='client.50305 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm05-94310-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:26:11.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:11 vm05 ceph-mon[61345]: osdmap e320: 8 total, 8 up, 8 in 2026-03-09T20:26:11.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:11 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3641217190' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm05-94310-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm05-94310-62"}]: dispatch 2026-03-09T20:26:11.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:11 vm05 ceph-mon[61345]: from='client.50305 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm05-94310-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm05-94310-62"}]: dispatch 2026-03-09T20:26:11.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:11 vm05 ceph-mon[51870]: pgmap v426: 292 pgs: 292 active+clean; 8.3 MiB data, 716 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:26:11.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:11 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-47"}]': finished 2026-03-09T20:26:11.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:11 vm05 ceph-mon[51870]: from='client.50305 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm05-94310-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:26:11.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:11 vm05 ceph-mon[51870]: osdmap e320: 8 total, 8 up, 8 in 2026-03-09T20:26:11.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:11 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3641217190' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm05-94310-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm05-94310-62"}]: dispatch 2026-03-09T20:26:11.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:11 vm05 ceph-mon[51870]: from='client.50305 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm05-94310-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm05-94310-62"}]: dispatch 2026-03-09T20:26:11.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:11 vm09 ceph-mon[54524]: pgmap v426: 292 pgs: 292 active+clean; 8.3 MiB data, 716 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:26:11.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:11 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-47"}]': finished 2026-03-09T20:26:11.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:11 vm09 ceph-mon[54524]: from='client.50305 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm05-94310-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:26:11.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:11 vm09 ceph-mon[54524]: osdmap e320: 8 total, 8 up, 8 in 2026-03-09T20:26:11.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:11 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3641217190' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm05-94310-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm05-94310-62"}]: dispatch 2026-03-09T20:26:11.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:11 vm09 ceph-mon[54524]: from='client.50305 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm05-94310-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm05-94310-62"}]: dispatch 2026-03-09T20:26:12.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:12 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:12.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:12 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:12.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:12 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-47"}]: dispatch 2026-03-09T20:26:12.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:12 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-47"}]: dispatch 2026-03-09T20:26:12.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:12 vm05 ceph-mon[61345]: osdmap e321: 8 total, 8 up, 8 in 2026-03-09T20:26:12.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:12 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:12.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:12 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:12.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:12 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-47"}]: dispatch 2026-03-09T20:26:12.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:12 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-47"}]: dispatch 2026-03-09T20:26:12.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:12 vm05 ceph-mon[51870]: osdmap e321: 8 total, 8 up, 8 in 2026-03-09T20:26:12.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:12 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:12.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:12 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:12.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:12 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-47"}]: dispatch 2026-03-09T20:26:12.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:12 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-47"}]: dispatch 2026-03-09T20:26:12.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:12 vm09 ceph-mon[54524]: osdmap e321: 8 total, 8 up, 8 in 2026-03-09T20:26:13.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:13 vm05 ceph-mon[61345]: pgmap v429: 260 pgs: 260 active+clean; 8.3 MiB data, 717 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T20:26:13.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:13 vm05 ceph-mon[61345]: from='client.50305 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm05-94310-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm05-94310-62"}]': finished 2026-03-09T20:26:13.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:13 vm05 ceph-mon[61345]: osdmap e322: 8 total, 8 up, 8 in 2026-03-09T20:26:13.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:13 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-49","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:26:13.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:13 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-49","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:26:13.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:13 vm05 ceph-mon[51870]: pgmap v429: 260 pgs: 260 active+clean; 8.3 MiB data, 717 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T20:26:13.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:13 vm05 ceph-mon[51870]: from='client.50305 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm05-94310-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm05-94310-62"}]': finished 2026-03-09T20:26:13.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:13 vm05 ceph-mon[51870]: osdmap e322: 8 total, 8 up, 8 in 2026-03-09T20:26:13.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:13 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-49","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:26:13.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:13 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-49","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:26:13.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:13 vm09 ceph-mon[54524]: pgmap v429: 260 pgs: 260 active+clean; 8.3 MiB data, 717 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T20:26:13.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:13 vm09 ceph-mon[54524]: from='client.50305 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm05-94310-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm05-94310-62"}]': finished 2026-03-09T20:26:13.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:13 vm09 ceph-mon[54524]: osdmap e322: 8 total, 8 up, 8 in 2026-03-09T20:26:13.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:13 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-49","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:26:13.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:13 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-49","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:26:15.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:15 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-49","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:26:15.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:15 vm05 ceph-mon[61345]: osdmap e323: 8 total, 8 up, 8 in 2026-03-09T20:26:15.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:15 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-49", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:26:15.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:15 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-49", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:26:15.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:15 vm05 ceph-mon[61345]: pgmap v432: 300 pgs: 40 unknown, 260 active+clean; 8.3 MiB data, 717 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T20:26:15.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:15 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:26:15.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:15 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-49","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:26:15.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:15 vm05 ceph-mon[51870]: osdmap e323: 8 total, 8 up, 8 in 2026-03-09T20:26:15.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:15 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-49", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:26:15.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:15 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-49", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:26:15.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:15 vm05 ceph-mon[51870]: pgmap v432: 300 pgs: 40 unknown, 260 active+clean; 8.3 MiB data, 717 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T20:26:15.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:15 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:26:15.457 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:15 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-49","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:26:15.457 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:15 vm09 ceph-mon[54524]: osdmap e323: 8 total, 8 up, 8 in 2026-03-09T20:26:15.457 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:15 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-49", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:26:15.457 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:15 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-49", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:26:15.457 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:15 vm09 ceph-mon[54524]: pgmap v432: 300 pgs: 40 unknown, 260 active+clean; 8.3 MiB data, 717 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T20:26:15.457 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:15 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:26:15.772 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:26:15 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:26:16.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:16 vm05 ceph-mon[61345]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:26:16.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:16 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-49", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:26:16.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:16 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-49"}]: dispatch 2026-03-09T20:26:16.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:16 vm05 ceph-mon[61345]: osdmap e324: 8 total, 8 up, 8 in 2026-03-09T20:26:16.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:16 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-49"}]: dispatch 2026-03-09T20:26:16.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:16 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3641217190' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm05-94310-62"}]: dispatch 2026-03-09T20:26:16.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:16 vm05 ceph-mon[61345]: from='client.50305 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm05-94310-62"}]: dispatch 2026-03-09T20:26:16.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:16 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:26:16.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:16 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-49"}]': finished 2026-03-09T20:26:16.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:16 vm05 ceph-mon[61345]: from='client.50305 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm05-94310-62"}]': finished 2026-03-09T20:26:16.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:16 vm05 ceph-mon[61345]: osdmap e325: 8 total, 8 up, 8 in 2026-03-09T20:26:16.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:16 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3641217190' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm05-94310-62"}]: dispatch 2026-03-09T20:26:16.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:16 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-49", "mode": "readproxy"}]: dispatch 2026-03-09T20:26:16.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:16 vm05 ceph-mon[61345]: from='client.50305 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm05-94310-62"}]: dispatch 2026-03-09T20:26:16.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:16 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-49", "mode": "readproxy"}]: dispatch 2026-03-09T20:26:16.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:16 vm05 ceph-mon[51870]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:26:16.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:16 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-49", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:26:16.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:16 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-49"}]: dispatch 2026-03-09T20:26:16.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:16 vm05 ceph-mon[51870]: osdmap e324: 8 total, 8 up, 8 in 2026-03-09T20:26:16.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:16 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-49"}]: dispatch 2026-03-09T20:26:16.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:16 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3641217190' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm05-94310-62"}]: dispatch 2026-03-09T20:26:16.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:16 vm05 ceph-mon[51870]: from='client.50305 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm05-94310-62"}]: dispatch 2026-03-09T20:26:16.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:16 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:26:16.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:16 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-49"}]': finished 2026-03-09T20:26:16.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:16 vm05 ceph-mon[51870]: from='client.50305 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm05-94310-62"}]': finished 2026-03-09T20:26:16.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:16 vm05 ceph-mon[51870]: osdmap e325: 8 total, 8 up, 8 in 2026-03-09T20:26:16.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:16 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3641217190' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm05-94310-62"}]: dispatch 2026-03-09T20:26:16.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:16 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-49", "mode": "readproxy"}]: dispatch 2026-03-09T20:26:16.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:16 vm05 ceph-mon[51870]: from='client.50305 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm05-94310-62"}]: dispatch 2026-03-09T20:26:16.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:16 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-49", "mode": "readproxy"}]: dispatch 2026-03-09T20:26:16.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:16 vm09 ceph-mon[54524]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:26:16.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:16 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-49", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:26:16.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:16 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-49"}]: dispatch 2026-03-09T20:26:16.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:16 vm09 ceph-mon[54524]: osdmap e324: 8 total, 8 up, 8 in 2026-03-09T20:26:16.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:16 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-49"}]: dispatch 2026-03-09T20:26:16.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:16 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3641217190' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm05-94310-62"}]: dispatch 2026-03-09T20:26:16.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:16 vm09 ceph-mon[54524]: from='client.50305 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm05-94310-62"}]: dispatch 2026-03-09T20:26:16.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:16 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:26:16.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:16 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-49"}]': finished 2026-03-09T20:26:16.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:16 vm09 ceph-mon[54524]: from='client.50305 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm05-94310-62"}]': finished 2026-03-09T20:26:16.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:16 vm09 ceph-mon[54524]: osdmap e325: 8 total, 8 up, 8 in 2026-03-09T20:26:16.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:16 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3641217190' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm05-94310-62"}]: dispatch 2026-03-09T20:26:16.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:16 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-49", "mode": "readproxy"}]: dispatch 2026-03-09T20:26:16.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:16 vm09 ceph-mon[54524]: from='client.50305 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm05-94310-62"}]: dispatch 2026-03-09T20:26:16.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:16 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-49", "mode": "readproxy"}]: dispatch 2026-03-09T20:26:17.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:17 vm05 ceph-mon[61345]: pgmap v435: 292 pgs: 292 active+clean; 8.3 MiB data, 718 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:26:17.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:17 vm05 ceph-mon[61345]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:26:17.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:17 vm05 ceph-mon[61345]: from='client.50305 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm05-94310-62"}]': finished 2026-03-09T20:26:17.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:17 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-49", "mode": "readproxy"}]': finished 2026-03-09T20:26:17.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:17 vm05 ceph-mon[61345]: osdmap e326: 8 total, 8 up, 8 in 2026-03-09T20:26:17.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:17 vm05 ceph-mon[51870]: pgmap v435: 292 pgs: 292 active+clean; 8.3 MiB data, 718 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:26:17.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:17 vm05 ceph-mon[51870]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:26:17.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:17 vm05 ceph-mon[51870]: from='client.50305 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm05-94310-62"}]': finished 2026-03-09T20:26:17.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:17 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-49", "mode": "readproxy"}]': finished 2026-03-09T20:26:17.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:17 vm05 ceph-mon[51870]: osdmap e326: 8 total, 8 up, 8 in 2026-03-09T20:26:17.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:17 vm09 ceph-mon[54524]: pgmap v435: 292 pgs: 292 active+clean; 8.3 MiB data, 718 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:26:17.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:17 vm09 ceph-mon[54524]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:26:17.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:17 vm09 ceph-mon[54524]: from='client.50305 ' entity='client.admin' cmd='[{"prefix": "osd crush 
rule rm", "name":"SimpleStatPPNS_vm05-94310-62"}]': finished 2026-03-09T20:26:17.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:17 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-49", "mode": "readproxy"}]': finished 2026-03-09T20:26:17.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:17 vm09 ceph-mon[54524]: osdmap e326: 8 total, 8 up, 8 in 2026-03-09T20:26:18.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:18 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1156946780' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm05-94310-63"}]: dispatch 2026-03-09T20:26:18.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:18 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1156946780' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm05-94310-63"}]: dispatch 2026-03-09T20:26:18.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:18 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1156946780' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm05-94310-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:26:18.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:18 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1156946780' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm05-94310-63"}]: dispatch 2026-03-09T20:26:18.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:18 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1156946780' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm05-94310-63"}]: dispatch 2026-03-09T20:26:18.409 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:18 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1156946780' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm05-94310-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:26:18.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:18 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1156946780' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm05-94310-63"}]: dispatch 2026-03-09T20:26:18.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:18 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1156946780' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm05-94310-63"}]: dispatch 2026-03-09T20:26:18.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:18 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1156946780' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm05-94310-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:26:18.909 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:26:18 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:26:18] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:26:19.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:19 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/1156946780' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm05-94310-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:26:19.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:19 vm09 ceph-mon[54524]: osdmap e327: 8 total, 8 up, 8 in 2026-03-09T20:26:19.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:19 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1156946780' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemovePP_vm05-94310-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm05-94310-63"}]: dispatch 2026-03-09T20:26:19.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:19 vm09 ceph-mon[54524]: pgmap v438: 292 pgs: 292 active+clean; 8.3 MiB data, 718 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:26:19.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:19 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1156946780' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm05-94310-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:26:19.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:19 vm05 ceph-mon[61345]: osdmap e327: 8 total, 8 up, 8 in 2026-03-09T20:26:19.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:19 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1156946780' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemovePP_vm05-94310-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm05-94310-63"}]: dispatch 2026-03-09T20:26:19.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:19 vm05 ceph-mon[61345]: pgmap v438: 292 pgs: 292 active+clean; 8.3 MiB data, 718 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:26:19.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:19 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1156946780' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm05-94310-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:26:19.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:19 vm05 ceph-mon[51870]: osdmap e327: 8 total, 8 up, 8 in 2026-03-09T20:26:19.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:19 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/1156946780' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemovePP_vm05-94310-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm05-94310-63"}]: dispatch 2026-03-09T20:26:19.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:19 vm05 ceph-mon[51870]: pgmap v438: 292 pgs: 292 active+clean; 8.3 MiB data, 718 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:26:20.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:20 vm09 ceph-mon[54524]: osdmap e328: 8 total, 8 up, 8 in 2026-03-09T20:26:20.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:20 vm05 ceph-mon[61345]: osdmap e328: 8 total, 8 up, 8 in 2026-03-09T20:26:20.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:20 vm05 ceph-mon[51870]: osdmap e328: 8 total, 8 up, 8 in 2026-03-09T20:26:21.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:21 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1156946780' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemovePP_vm05-94310-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm05-94310-63"}]': finished 2026-03-09T20:26:21.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:21 vm09 ceph-mon[54524]: osdmap e329: 8 total, 8 up, 8 in 2026-03-09T20:26:21.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:21 vm09 ceph-mon[54524]: pgmap v441: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 718 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:26:21.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:21 vm09 ceph-mon[54524]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:26:21.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:21 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1156946780' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemovePP_vm05-94310-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm05-94310-63"}]': finished 2026-03-09T20:26:21.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:21 vm05 ceph-mon[61345]: osdmap e329: 8 total, 8 up, 8 in 2026-03-09T20:26:21.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:21 vm05 ceph-mon[61345]: pgmap v441: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 718 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:26:21.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:21 vm05 ceph-mon[61345]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:26:21.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:21 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/1156946780' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemovePP_vm05-94310-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm05-94310-63"}]': finished 2026-03-09T20:26:21.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:21 vm05 ceph-mon[51870]: osdmap e329: 8 total, 8 up, 8 in 2026-03-09T20:26:21.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:21 vm05 ceph-mon[51870]: pgmap v441: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 718 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:26:21.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:21 vm05 ceph-mon[51870]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:26:22.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:22 vm09 ceph-mon[54524]: osdmap e330: 8 total, 8 up, 8 in 2026-03-09T20:26:22.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:22 vm05 ceph-mon[61345]: osdmap e330: 8 total, 8 up, 8 in 2026-03-09T20:26:22.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:22 vm05 ceph-mon[51870]: osdmap e330: 8 total, 8 up, 8 in 2026-03-09T20:26:23.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:23 vm09 ceph-mon[54524]: osdmap e331: 8 total, 8 up, 8 in 2026-03-09T20:26:23.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:23 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1156946780' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm05-94310-63"}]: dispatch 2026-03-09T20:26:23.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:23 vm09 ceph-mon[54524]: pgmap v444: 292 pgs: 292 active+clean; 8.3 MiB data, 718 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T20:26:23.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:23 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1156946780' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm05-94310-63"}]': finished 2026-03-09T20:26:23.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:23 vm09 ceph-mon[54524]: osdmap e332: 8 total, 8 up, 8 in 2026-03-09T20:26:23.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:23 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1156946780' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm05-94310-63"}]: dispatch 2026-03-09T20:26:23.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:23 vm05 ceph-mon[61345]: osdmap e331: 8 total, 8 up, 8 in 2026-03-09T20:26:23.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:23 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1156946780' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm05-94310-63"}]: dispatch 2026-03-09T20:26:23.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:23 vm05 ceph-mon[61345]: pgmap v444: 292 pgs: 292 active+clean; 8.3 MiB data, 718 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T20:26:23.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:23 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1156946780' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm05-94310-63"}]': finished 2026-03-09T20:26:23.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:23 vm05 ceph-mon[61345]: osdmap e332: 8 total, 8 up, 8 in 2026-03-09T20:26:23.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:23 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1156946780' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm05-94310-63"}]: dispatch 2026-03-09T20:26:23.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:23 vm05 ceph-mon[51870]: osdmap e331: 8 total, 8 up, 8 in 2026-03-09T20:26:23.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:23 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1156946780' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm05-94310-63"}]: dispatch 2026-03-09T20:26:23.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:23 vm05 ceph-mon[51870]: pgmap v444: 292 pgs: 292 active+clean; 8.3 MiB data, 718 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T20:26:23.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:23 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1156946780' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm05-94310-63"}]': finished 2026-03-09T20:26:23.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:23 vm05 ceph-mon[51870]: osdmap e332: 8 total, 8 up, 8 in 2026-03-09T20:26:23.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:23 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1156946780' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm05-94310-63"}]: dispatch 2026-03-09T20:26:25.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:25 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1156946780' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm05-94310-63"}]': finished 2026-03-09T20:26:25.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:25 vm09 ceph-mon[54524]: osdmap e333: 8 total, 8 up, 8 in 2026-03-09T20:26:25.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:25 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/347709926' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm05-94310-64"}]: dispatch 2026-03-09T20:26:25.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:25 vm09 ceph-mon[54524]: from='client.49562 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm05-94310-64"}]: dispatch 2026-03-09T20:26:25.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:25 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/347709926' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm05-94310-64"}]: dispatch 2026-03-09T20:26:25.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:25 vm09 ceph-mon[54524]: from='client.49562 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm05-94310-64"}]: dispatch 2026-03-09T20:26:25.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:25 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/347709926' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm05-94310-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:26:25.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:25 vm09 ceph-mon[54524]: from='client.49562 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm05-94310-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:26:25.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:25 vm09 ceph-mon[54524]: pgmap v447: 292 pgs: 292 active+clean; 8.3 MiB data, 718 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T20:26:25.523 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:26:25 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:26:25.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:25 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1156946780' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm05-94310-63"}]': finished 2026-03-09T20:26:25.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:25 vm05 ceph-mon[61345]: osdmap e333: 8 total, 8 up, 8 in 2026-03-09T20:26:25.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:25 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/347709926' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm05-94310-64"}]: dispatch 2026-03-09T20:26:25.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:25 vm05 ceph-mon[61345]: from='client.49562 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm05-94310-64"}]: dispatch 2026-03-09T20:26:25.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:25 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/347709926' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm05-94310-64"}]: dispatch 2026-03-09T20:26:25.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:25 vm05 ceph-mon[61345]: from='client.49562 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm05-94310-64"}]: dispatch 2026-03-09T20:26:25.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:25 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/347709926' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm05-94310-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:26:25.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:25 vm05 ceph-mon[61345]: from='client.49562 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm05-94310-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:26:25.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:25 vm05 ceph-mon[61345]: pgmap v447: 292 pgs: 292 active+clean; 8.3 MiB data, 718 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T20:26:25.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:25 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/1156946780' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm05-94310-63"}]': finished 2026-03-09T20:26:25.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:25 vm05 ceph-mon[51870]: osdmap e333: 8 total, 8 up, 8 in 2026-03-09T20:26:25.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:25 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/347709926' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm05-94310-64"}]: dispatch 2026-03-09T20:26:25.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:25 vm05 ceph-mon[51870]: from='client.49562 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm05-94310-64"}]: dispatch 2026-03-09T20:26:25.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:25 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/347709926' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm05-94310-64"}]: dispatch 2026-03-09T20:26:25.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:25 vm05 ceph-mon[51870]: from='client.49562 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm05-94310-64"}]: dispatch 2026-03-09T20:26:25.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:25 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/347709926' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm05-94310-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:26:25.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:25 vm05 ceph-mon[51870]: from='client.49562 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm05-94310-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:26:25.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:25 vm05 ceph-mon[51870]: pgmap v447: 292 pgs: 292 active+clean; 8.3 MiB data, 718 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T20:26:26.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:26 vm09 ceph-mon[54524]: from='client.49562 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm05-94310-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:26:26.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:26 vm09 ceph-mon[54524]: osdmap e334: 8 total, 8 up, 8 in 2026-03-09T20:26:26.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:26 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/347709926' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm05-94310-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm05-94310-64"}]: dispatch 2026-03-09T20:26:26.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:26 vm09 ceph-mon[54524]: from='client.49562 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm05-94310-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm05-94310-64"}]: dispatch 2026-03-09T20:26:26.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:26 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:26:26.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:26 vm05 ceph-mon[61345]: from='client.49562 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm05-94310-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:26:26.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:26 vm05 ceph-mon[61345]: osdmap e334: 8 total, 8 up, 8 in 2026-03-09T20:26:26.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:26 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/347709926' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm05-94310-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm05-94310-64"}]: dispatch 2026-03-09T20:26:26.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:26 vm05 ceph-mon[61345]: from='client.49562 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm05-94310-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm05-94310-64"}]: dispatch 2026-03-09T20:26:26.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:26 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:26:26.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:26 vm05 ceph-mon[51870]: from='client.49562 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm05-94310-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:26:26.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:26 vm05 ceph-mon[51870]: osdmap e334: 8 total, 8 up, 8 in 2026-03-09T20:26:26.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:26 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/347709926' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm05-94310-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm05-94310-64"}]: dispatch 2026-03-09T20:26:26.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:26 vm05 ceph-mon[51870]: from='client.49562 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm05-94310-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm05-94310-64"}]: dispatch 2026-03-09T20:26:26.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:26 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:26:27.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:27 vm09 ceph-mon[54524]: osdmap e335: 8 total, 8 up, 8 in 2026-03-09T20:26:27.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:27 vm09 ceph-mon[54524]: pgmap v450: 292 pgs: 292 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:26:27.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:27 vm09 ceph-mon[54524]: from='client.49562 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm05-94310-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm05-94310-64"}]': finished 2026-03-09T20:26:27.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:27 vm09 ceph-mon[54524]: osdmap e336: 8 total, 8 up, 8 in 2026-03-09T20:26:27.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:27 vm05 ceph-mon[61345]: osdmap e335: 8 total, 8 up, 8 in 2026-03-09T20:26:27.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:27 vm05 ceph-mon[61345]: pgmap v450: 292 pgs: 292 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:26:27.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:27 vm05 ceph-mon[61345]: from='client.49562 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm05-94310-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm05-94310-64"}]': finished 2026-03-09T20:26:27.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:27 vm05 ceph-mon[61345]: osdmap e336: 8 total, 8 up, 8 in 2026-03-09T20:26:27.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:27 vm05 ceph-mon[51870]: osdmap e335: 8 total, 8 up, 8 in 2026-03-09T20:26:27.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:27 vm05 ceph-mon[51870]: pgmap v450: 292 pgs: 292 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:26:27.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:27 vm05 ceph-mon[51870]: from='client.49562 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm05-94310-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm05-94310-64"}]': finished 2026-03-09T20:26:27.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:27 vm05 ceph-mon[51870]: osdmap e336: 8 total, 8 up, 8 in 2026-03-09T20:26:28.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:28 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:28.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:28 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:28.633 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:28 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:28.633 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:28 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:28.634 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:28 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:28.634 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:28 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:28.909 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:26:28 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:26:28] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:26:29.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:29 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:26:29.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:29 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-49"}]: dispatch 2026-03-09T20:26:29.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:29 vm05 ceph-mon[61345]: osdmap e337: 8 total, 8 up, 8 in 2026-03-09T20:26:29.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:29 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-49"}]: dispatch 2026-03-09T20:26:29.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:29 vm05 ceph-mon[61345]: pgmap v453: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:26:29.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:29 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:26:29.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:29 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-49"}]: dispatch 2026-03-09T20:26:29.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:29 vm05 ceph-mon[51870]: osdmap e337: 8 total, 8 up, 8 in 2026-03-09T20:26:29.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:29 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-49"}]: dispatch 2026-03-09T20:26:29.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:29 vm05 ceph-mon[51870]: pgmap v453: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:26:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:29 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:26:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:29 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-49"}]: dispatch 2026-03-09T20:26:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:29 vm09 ceph-mon[54524]: osdmap e337: 8 total, 8 up, 8 in 2026-03-09T20:26:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:29 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-49"}]: dispatch 2026-03-09T20:26:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:29 vm09 ceph-mon[54524]: pgmap v453: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:26:30.490 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.RoundTripWriteFullPP2163:head 2026-03-09T20:26:30.490 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:5d165639:::164:head 2026-03-09T20:26:30.490 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:f43765fc:::165:head 2026-03-09T20:26:30.490 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:b4c720e9:::166:head 2026-03-09T20:26:30.490 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:e694b040:::167:head 2026-03-09T20:26:30.490 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:afa38db2:::168:head 2026-03-09T20:26:30.490 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:77ba9f53:::169:head 2026-03-09T20:26:30.490 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:87495034:::170:head 2026-03-09T20:26:30.490 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:7c96bf0e:::171:head 2026-03-09T20:26:30.490 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:dbe346cc:::172:head 2026-03-09T20:26:30.490 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:e943ec24:::173:head 2026-03-09T20:26:30.490 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:f97a9c0c:::174:head 2026-03-09T20:26:30.490 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:6f26e74d:::175:head 2026-03-09T20:26:30.490 
INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:4f95e106:::176:head 2026-03-09T20:26:30.490 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:0e6f2f8f:::177:head 2026-03-09T20:26:30.490 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:05db05f1:::178:head 2026-03-09T20:26:30.490 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:38a78d66:::179:head 2026-03-09T20:26:30.490 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:d095610b:::180:head 2026-03-09T20:26:30.490 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:a1a9d709:::181:head 2026-03-09T20:26:30.490 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:1e5d39db:::182:head 2026-03-09T20:26:30.490 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:f7df4fb9:::183:head 2026-03-09T20:26:30.490 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:03a7f161:::184:head 2026-03-09T20:26:30.490 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:ba70721e:::185:head 2026-03-09T20:26:30.490 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:28e5662d:::186:head 2026-03-09T20:26:30.490 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:973d52de:::187:head 2026-03-09T20:26:30.490 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:4303eb1c:::188:head 2026-03-09T20:26:30.490 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:b990b48e:::189:head 2026-03-09T20:26:30.490 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:29b8165b:::190:head 2026-03-09T20:26:30.490 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:3547f197:::191:head 2026-03-09T20:26:30.490 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:7e260936:::192:head 2026-03-09T20:26:30.490 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:1abec7b1:::193:head 2026-03-09T20:26:30.490 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:10fdda93:::194:head 2026-03-09T20:26:30.490 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:15817eea:::195:head 2026-03-09T20:26:30.490 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:770bab57:::196:head 2026-03-09T20:26:30.490 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:ed9e13e7:::197:head 2026-03-09T20:26:30.490 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:71471a8f:::198:head 2026-03-09T20:26:30.490 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:10fb1d02:::199:head 2026-03-09T20:26:30.490 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.HitSetWrite (8108 ms) 2026-03-09T20:26:30.490 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.HitSetTrim 2026-03-09T20:26:30.490 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773087945,0 2026-03-09T20:26:30.490 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: first is 1773087945 2026-03-09T20:26:30.490 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773087945,0 2026-03-09T20:26:30.490 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773087945,0 2026-03-09T20:26:30.490 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773087945,1773087947,1773087948,0 2026-03-09T20:26:30.490 
INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773087945,1773087947,1773087948,0 2026-03-09T20:26:30.491 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773087945,1773087947,1773087948,0 2026-03-09T20:26:30.491 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773087945,1773087947,1773087948,1773087950,1773087951,0 2026-03-09T20:26:30.491 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773087945,1773087947,1773087948,1773087950,1773087951,0 2026-03-09T20:26:30.491 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773087945,1773087947,1773087948,1773087950,1773087951,0 2026-03-09T20:26:30.491 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773087945,1773087947,1773087948,1773087950,1773087951,1773087953,1773087954,0 2026-03-09T20:26:30.491 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773087945,1773087947,1773087948,1773087950,1773087951,1773087953,1773087954,0 2026-03-09T20:26:30.491 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773087945,1773087947,1773087948,1773087950,1773087951,1773087953,1773087954,0 2026-03-09T20:26:30.491 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773087948,1773087950,1773087951,1773087953,1773087954,1773087956,1773087957,0 2026-03-09T20:26:30.491 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: first now 1773087948, trimmed 2026-03-09T20:26:30.491 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.HitSetTrim (20320 ms) 2026-03-09T20:26:30.491 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.PromoteOn2ndRead 2026-03-09T20:26:30.491 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: foo0 2026-03-09T20:26:30.491 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: verifying foo0 is eventually promoted 2026-03-09T20:26:30.491 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.PromoteOn2ndRead (14316 ms) 2026-03-09T20:26:30.491 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ProxyRead 2026-03-09T20:26:30.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:30 vm09 ceph-mon[54524]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:26:30.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:30 vm09 ceph-mon[54524]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:26:30.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:30 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-49"}]': finished 2026-03-09T20:26:30.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:30 vm09 ceph-mon[54524]: osdmap e338: 8 total, 8 up, 8 in 2026-03-09T20:26:30.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:30 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/347709926' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm05-94310-64"}]: dispatch 2026-03-09T20:26:30.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:30 vm09 ceph-mon[54524]: from='client.49562 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm05-94310-64"}]: dispatch 2026-03-09T20:26:30.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:30 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:30.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:30 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:30.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:30 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-49"}]: dispatch 2026-03-09T20:26:30.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:30 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-49"}]: dispatch 2026-03-09T20:26:30.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:30 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:26:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:30 vm05 ceph-mon[61345]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:26:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:30 vm05 ceph-mon[61345]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:26:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:30 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-49"}]': finished 2026-03-09T20:26:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:30 vm05 ceph-mon[61345]: osdmap e338: 8 total, 8 up, 8 in 2026-03-09T20:26:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:30 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/347709926' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm05-94310-64"}]: dispatch 2026-03-09T20:26:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:30 vm05 ceph-mon[61345]: from='client.49562 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm05-94310-64"}]: dispatch 2026-03-09T20:26:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:30 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:30 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:30 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-49"}]: dispatch 2026-03-09T20:26:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:30 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-49"}]: dispatch 2026-03-09T20:26:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:30 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:26:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:30 vm05 ceph-mon[51870]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:26:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:30 vm05 ceph-mon[51870]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:26:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:30 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-49"}]': finished 2026-03-09T20:26:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:30 vm05 ceph-mon[51870]: osdmap e338: 8 total, 8 up, 8 in 2026-03-09T20:26:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:30 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/347709926' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm05-94310-64"}]: dispatch 2026-03-09T20:26:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:30 vm05 ceph-mon[51870]: from='client.49562 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm05-94310-64"}]: dispatch 2026-03-09T20:26:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:30 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:30 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:30 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-49"}]: dispatch 2026-03-09T20:26:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:30 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-49"}]: dispatch 2026-03-09T20:26:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:30 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:26:31.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:31 vm09 ceph-mon[54524]: pgmap v455: 292 pgs: 292 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:26:31.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:31 vm09 ceph-mon[54524]: from='client.49562 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm05-94310-64"}]': finished 2026-03-09T20:26:31.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:31 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/347709926' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm05-94310-64"}]: dispatch 2026-03-09T20:26:31.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:31 vm09 ceph-mon[54524]: osdmap e339: 8 total, 8 up, 8 in 2026-03-09T20:26:31.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:31 vm09 ceph-mon[54524]: from='client.49562 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm05-94310-64"}]: dispatch 2026-03-09T20:26:31.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:31 vm09 ceph-mon[54524]: from='client.49562 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm05-94310-64"}]': finished 2026-03-09T20:26:31.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:31 vm09 ceph-mon[54524]: osdmap e340: 8 total, 8 up, 8 in 2026-03-09T20:26:31.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:31 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:26:31.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:31 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:26:31.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:31 vm05 ceph-mon[61345]: pgmap v455: 292 pgs: 292 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:26:31.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:31 vm05 ceph-mon[61345]: from='client.49562 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm05-94310-64"}]': finished 2026-03-09T20:26:31.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:31 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/347709926' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm05-94310-64"}]: dispatch 2026-03-09T20:26:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:31 vm05 ceph-mon[61345]: osdmap e339: 8 total, 8 up, 8 in 2026-03-09T20:26:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:31 vm05 ceph-mon[61345]: from='client.49562 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm05-94310-64"}]: dispatch 2026-03-09T20:26:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:31 vm05 ceph-mon[61345]: from='client.49562 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm05-94310-64"}]': finished 2026-03-09T20:26:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:31 vm05 ceph-mon[61345]: osdmap e340: 8 total, 8 up, 8 in 2026-03-09T20:26:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:31 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:26:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:31 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:26:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:31 vm05 ceph-mon[51870]: pgmap v455: 292 pgs: 292 active+clean; 8.3 MiB data, 719 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:26:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:31 vm05 ceph-mon[51870]: from='client.49562 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm05-94310-64"}]': finished 2026-03-09T20:26:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:31 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/347709926' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm05-94310-64"}]: dispatch 2026-03-09T20:26:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:31 vm05 ceph-mon[51870]: osdmap e339: 8 total, 8 up, 8 in 2026-03-09T20:26:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:31 vm05 ceph-mon[51870]: from='client.49562 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm05-94310-64"}]: dispatch 2026-03-09T20:26:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:31 vm05 ceph-mon[51870]: from='client.49562 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm05-94310-64"}]': finished 2026-03-09T20:26:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:31 vm05 ceph-mon[51870]: osdmap e340: 8 total, 8 up, 8 in 2026-03-09T20:26:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:31 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:26:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:31 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:26:32.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:32 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3497587485' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm05-94310-65"}]: dispatch 2026-03-09T20:26:32.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:32 vm09 ceph-mon[54524]: from='client.50320 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm05-94310-65"}]: dispatch 2026-03-09T20:26:32.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:32 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3497587485' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm05-94310-65"}]: dispatch 2026-03-09T20:26:32.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:32 vm09 ceph-mon[54524]: from='client.50320 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm05-94310-65"}]: dispatch 2026-03-09T20:26:32.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:32 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3497587485' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm05-94310-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:26:32.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:32 vm09 ceph-mon[54524]: from='client.50320 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm05-94310-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:26:32.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:32 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:26:32.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:32 vm09 ceph-mon[54524]: from='client.50320 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm05-94310-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:26:32.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:32 vm09 ceph-mon[54524]: osdmap e341: 8 total, 8 up, 8 in 2026-03-09T20:26:32.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:32 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3497587485' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm05-94310-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm05-94310-65"}]: dispatch 2026-03-09T20:26:32.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:32 vm09 ceph-mon[54524]: from='client.50320 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm05-94310-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm05-94310-65"}]: dispatch 2026-03-09T20:26:32.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:32 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3497587485' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm05-94310-65"}]: dispatch 2026-03-09T20:26:32.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:32 vm05 ceph-mon[61345]: from='client.50320 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm05-94310-65"}]: dispatch 2026-03-09T20:26:32.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:32 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3497587485' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm05-94310-65"}]: dispatch 2026-03-09T20:26:32.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:32 vm05 ceph-mon[61345]: from='client.50320 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm05-94310-65"}]: dispatch 2026-03-09T20:26:32.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:32 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3497587485' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm05-94310-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:26:32.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:32 vm05 ceph-mon[61345]: from='client.50320 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm05-94310-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:26:32.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:32 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:26:32.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:32 vm05 ceph-mon[61345]: from='client.50320 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm05-94310-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:26:32.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:32 vm05 ceph-mon[61345]: osdmap e341: 8 total, 8 up, 8 in 2026-03-09T20:26:32.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:32 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3497587485' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm05-94310-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm05-94310-65"}]: dispatch 2026-03-09T20:26:32.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:32 vm05 ceph-mon[61345]: from='client.50320 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm05-94310-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm05-94310-65"}]: dispatch 2026-03-09T20:26:32.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:32 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3497587485' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm05-94310-65"}]: dispatch 2026-03-09T20:26:32.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:32 vm05 ceph-mon[51870]: from='client.50320 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm05-94310-65"}]: dispatch 2026-03-09T20:26:32.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:32 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3497587485' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm05-94310-65"}]: dispatch 2026-03-09T20:26:32.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:32 vm05 ceph-mon[51870]: from='client.50320 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm05-94310-65"}]: dispatch 2026-03-09T20:26:32.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:32 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3497587485' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm05-94310-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:26:32.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:32 vm05 ceph-mon[51870]: from='client.50320 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm05-94310-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:26:32.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:32 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:26:32.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:32 vm05 ceph-mon[51870]: from='client.50320 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm05-94310-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:26:32.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:32 vm05 ceph-mon[51870]: osdmap e341: 8 total, 8 up, 8 in 2026-03-09T20:26:32.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:32 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3497587485' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm05-94310-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm05-94310-65"}]: dispatch 2026-03-09T20:26:32.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:32 vm05 ceph-mon[51870]: from='client.50320 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm05-94310-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm05-94310-65"}]: dispatch 2026-03-09T20:26:33.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:33 vm05 ceph-mon[61345]: pgmap v458: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 720 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:26:33.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:33 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-51", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:26:33.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:33 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-51", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:26:33.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:33 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-51", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:26:33.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:33 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-51"}]: dispatch 2026-03-09T20:26:33.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:33 vm05 ceph-mon[61345]: osdmap e342: 8 total, 8 up, 8 in 2026-03-09T20:26:33.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:33 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-51"}]: dispatch 2026-03-09T20:26:33.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:33 vm05 ceph-mon[51870]: pgmap v458: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 720 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:26:33.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:33 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-51", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:26:33.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:33 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-51", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:26:33.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:33 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-51", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:26:33.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:33 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-51"}]: dispatch 2026-03-09T20:26:33.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:33 vm05 ceph-mon[51870]: osdmap e342: 8 total, 8 up, 8 in 2026-03-09T20:26:33.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:33 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-51"}]: dispatch 2026-03-09T20:26:34.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:33 vm09 ceph-mon[54524]: pgmap v458: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 720 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:26:34.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:33 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-51", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:26:34.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:33 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-51", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:26:34.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:33 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-51", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:26:34.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:33 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-51"}]: dispatch 2026-03-09T20:26:34.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:33 vm09 ceph-mon[54524]: osdmap e342: 8 total, 8 up, 8 in 2026-03-09T20:26:34.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:33 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-51"}]: dispatch 2026-03-09T20:26:35.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:35 vm09 ceph-mon[54524]: pgmap v461: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 720 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:26:35.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:35 vm09 ceph-mon[54524]: from='client.50320 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "OmapPP_vm05-94310-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm05-94310-65"}]': finished 2026-03-09T20:26:35.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:35 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-51"}]': finished 2026-03-09T20:26:35.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:35 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-51", "mode": "writeback"}]: dispatch 2026-03-09T20:26:35.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:35 vm09 ceph-mon[54524]: osdmap e343: 8 total, 8 up, 8 in 2026-03-09T20:26:35.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:35 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-51", "mode": "writeback"}]: dispatch 2026-03-09T20:26:35.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:35 vm09 ceph-mon[54524]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:26:35.773 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:26:35 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:26:35.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:35 vm05 ceph-mon[61345]: pgmap v461: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 720 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:26:35.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:35 vm05 ceph-mon[61345]: from='client.50320 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "OmapPP_vm05-94310-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm05-94310-65"}]': finished 2026-03-09T20:26:35.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:35 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-51"}]': finished 2026-03-09T20:26:35.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:35 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-51", "mode": "writeback"}]: dispatch 2026-03-09T20:26:35.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:35 vm05 ceph-mon[61345]: osdmap e343: 8 total, 8 up, 8 in 2026-03-09T20:26:35.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:35 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-51", "mode": "writeback"}]: dispatch 2026-03-09T20:26:35.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:35 vm05 ceph-mon[61345]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:26:35.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:35 vm05 ceph-mon[51870]: pgmap v461: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 720 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:26:35.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:35 vm05 ceph-mon[51870]: from='client.50320 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "OmapPP_vm05-94310-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm05-94310-65"}]': finished 2026-03-09T20:26:35.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:35 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-51"}]': finished 2026-03-09T20:26:35.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:35 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-51", "mode": "writeback"}]: dispatch 2026-03-09T20:26:35.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:35 vm05 ceph-mon[51870]: osdmap e343: 8 total, 8 up, 8 in 2026-03-09T20:26:35.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:35 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-51", "mode": "writeback"}]: dispatch 2026-03-09T20:26:35.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:35 vm05 ceph-mon[51870]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:26:36.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:36 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:26:36.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:36 vm05 ceph-mon[61345]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:26:36.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:36 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-51", "mode": "writeback"}]': finished 2026-03-09T20:26:36.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:36 vm05 ceph-mon[61345]: osdmap e344: 8 total, 8 up, 8 in 2026-03-09T20:26:36.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:36 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-51","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T20:26:36.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:36 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-51","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T20:26:36.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:36 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-51","var": "hit_set_count","val": "2"}]': finished 2026-03-09T20:26:36.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:36 vm05 ceph-mon[61345]: osdmap e345: 8 total, 8 up, 8 in 2026-03-09T20:26:36.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:36 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-51","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:26:36.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:36 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-51","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:26:36.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:36 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3497587485' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm05-94310-65"}]: dispatch 2026-03-09T20:26:36.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:36 vm05 ceph-mon[61345]: from='client.50320 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm05-94310-65"}]: dispatch 2026-03-09T20:26:36.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:36 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:26:36.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:36 vm05 ceph-mon[51870]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:26:36.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:36 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-51", "mode": "writeback"}]': finished 2026-03-09T20:26:36.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:36 vm05 ceph-mon[51870]: osdmap e344: 8 total, 8 up, 8 in 2026-03-09T20:26:36.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:36 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-51","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T20:26:36.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:36 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-51","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T20:26:36.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:36 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-51","var": "hit_set_count","val": "2"}]': finished 2026-03-09T20:26:36.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:36 vm05 ceph-mon[51870]: osdmap e345: 8 total, 8 up, 8 in 2026-03-09T20:26:36.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:36 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-51","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:26:36.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:36 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-51","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:26:36.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:36 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3497587485' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm05-94310-65"}]: dispatch 2026-03-09T20:26:36.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:36 vm05 ceph-mon[51870]: from='client.50320 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm05-94310-65"}]: dispatch 2026-03-09T20:26:37.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:36 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:26:37.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:36 vm09 ceph-mon[54524]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:26:37.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:36 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-51", "mode": "writeback"}]': finished 2026-03-09T20:26:37.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:36 vm09 ceph-mon[54524]: osdmap e344: 8 total, 8 up, 8 in 2026-03-09T20:26:37.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:36 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-51","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T20:26:37.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:36 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-51","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T20:26:37.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:36 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-51","var": "hit_set_count","val": "2"}]': finished 2026-03-09T20:26:37.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:36 vm09 ceph-mon[54524]: osdmap e345: 8 total, 8 up, 8 in 2026-03-09T20:26:37.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:36 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-51","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:26:37.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:36 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-51","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:26:37.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:36 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3497587485' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm05-94310-65"}]: dispatch 2026-03-09T20:26:37.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:36 vm09 ceph-mon[54524]: from='client.50320 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm05-94310-65"}]: dispatch 2026-03-09T20:26:37.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:37 vm05 ceph-mon[61345]: pgmap v464: 300 pgs: 1 creating+activating, 4 creating+peering, 295 active+clean; 8.3 MiB data, 738 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 3 op/s 2026-03-09T20:26:37.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:37 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-51","var": "hit_set_period","val": "600"}]': finished 2026-03-09T20:26:37.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:37 vm05 ceph-mon[61345]: from='client.50320 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm05-94310-65"}]': finished 2026-03-09T20:26:37.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:37 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3497587485' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm05-94310-65"}]: dispatch 2026-03-09T20:26:37.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:37 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-51","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T20:26:37.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:37 vm05 ceph-mon[61345]: osdmap e346: 8 total, 8 up, 8 in 2026-03-09T20:26:37.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:37 vm05 ceph-mon[61345]: from='client.50320 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm05-94310-65"}]: dispatch 2026-03-09T20:26:37.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:37 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-51","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T20:26:37.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:37 vm05 ceph-mon[51870]: pgmap v464: 300 pgs: 1 creating+activating, 4 creating+peering, 295 active+clean; 8.3 MiB data, 738 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 3 op/s 2026-03-09T20:26:37.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:37 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-51","var": "hit_set_period","val": "600"}]': finished 2026-03-09T20:26:37.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:37 vm05 ceph-mon[51870]: from='client.50320 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm05-94310-65"}]': finished 2026-03-09T20:26:37.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:37 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3497587485' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm05-94310-65"}]: dispatch 2026-03-09T20:26:37.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:37 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-51","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T20:26:37.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:37 vm05 ceph-mon[51870]: osdmap e346: 8 total, 8 up, 8 in 2026-03-09T20:26:37.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:37 vm05 ceph-mon[51870]: from='client.50320 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm05-94310-65"}]: dispatch 2026-03-09T20:26:37.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:37 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-51","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T20:26:38.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:37 vm09 ceph-mon[54524]: pgmap v464: 300 pgs: 1 creating+activating, 4 creating+peering, 295 active+clean; 8.3 MiB data, 738 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 3 op/s 2026-03-09T20:26:38.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:37 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-51","var": "hit_set_period","val": "600"}]': finished 2026-03-09T20:26:38.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:37 vm09 ceph-mon[54524]: from='client.50320 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm05-94310-65"}]': finished 2026-03-09T20:26:38.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:37 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3497587485' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm05-94310-65"}]: dispatch 2026-03-09T20:26:38.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:37 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-51","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T20:26:38.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:37 vm09 ceph-mon[54524]: osdmap e346: 8 total, 8 up, 8 in 2026-03-09T20:26:38.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:37 vm09 ceph-mon[54524]: from='client.50320 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm05-94310-65"}]: dispatch 2026-03-09T20:26:38.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:37 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-51","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T20:26:38.811 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:38 vm05 ceph-mon[61345]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:26:38.811 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:38 vm05 ceph-mon[61345]: from='client.50320 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"OmapPP_vm05-94310-65"}]': finished 2026-03-09T20:26:38.811 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:38 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-51","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T20:26:38.811 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:38 vm05 ceph-mon[61345]: osdmap e347: 8 total, 8 up, 8 in 2026-03-09T20:26:38.811 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:38 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-51","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T20:26:38.811 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:38 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-51","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T20:26:38.811 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:38 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1165061664' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm05-94310-66"}]: dispatch 2026-03-09T20:26:38.811 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:38 vm05 ceph-mon[61345]: from='client.49571 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm05-94310-66"}]: dispatch 2026-03-09T20:26:38.811 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:38 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1165061664' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm05-94310-66"}]: dispatch 2026-03-09T20:26:38.811 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:38 vm05 ceph-mon[61345]: from='client.49571 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm05-94310-66"}]: dispatch 2026-03-09T20:26:38.811 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:38 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1165061664' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm05-94310-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:26:38.811 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:38 vm05 ceph-mon[61345]: from='client.49571 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm05-94310-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:26:38.811 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:38 vm05 ceph-mon[51870]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:26:38.811 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:38 vm05 ceph-mon[51870]: from='client.50320 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"OmapPP_vm05-94310-65"}]': finished 2026-03-09T20:26:38.811 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:38 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-51","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T20:26:38.811 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:38 vm05 ceph-mon[51870]: osdmap e347: 8 total, 8 up, 8 in 2026-03-09T20:26:38.811 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:38 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-51","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T20:26:38.811 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:38 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-51","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T20:26:38.811 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:38 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1165061664' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm05-94310-66"}]: dispatch 2026-03-09T20:26:38.812 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:38 vm05 ceph-mon[51870]: from='client.49571 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm05-94310-66"}]: dispatch 2026-03-09T20:26:38.812 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:38 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1165061664' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm05-94310-66"}]: dispatch 2026-03-09T20:26:38.812 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:38 vm05 ceph-mon[51870]: from='client.49571 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm05-94310-66"}]: dispatch 2026-03-09T20:26:38.812 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:38 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/1165061664' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm05-94310-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:26:38.812 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:38 vm05 ceph-mon[51870]: from='client.49571 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm05-94310-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:26:38.812 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:26:38 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:26:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:26:39.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:38 vm09 ceph-mon[54524]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:26:39.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:38 vm09 ceph-mon[54524]: from='client.50320 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"OmapPP_vm05-94310-65"}]': finished 2026-03-09T20:26:39.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:38 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-51","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T20:26:39.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:38 vm09 ceph-mon[54524]: osdmap e347: 8 total, 8 up, 8 in 2026-03-09T20:26:39.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:38 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-51","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T20:26:39.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:38 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-51","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T20:26:39.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:38 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1165061664' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm05-94310-66"}]: dispatch 2026-03-09T20:26:39.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:38 vm09 ceph-mon[54524]: from='client.49571 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm05-94310-66"}]: dispatch 2026-03-09T20:26:39.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:38 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1165061664' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm05-94310-66"}]: dispatch 2026-03-09T20:26:39.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:38 vm09 ceph-mon[54524]: from='client.49571 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm05-94310-66"}]: dispatch 2026-03-09T20:26:39.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:38 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/1165061664' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm05-94310-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:26:39.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:38 vm09 ceph-mon[54524]: from='client.49571 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm05-94310-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:26:39.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:39 vm05 ceph-mon[61345]: pgmap v467: 292 pgs: 292 active+clean; 8.3 MiB data, 738 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 3 op/s 2026-03-09T20:26:39.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:39 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-51","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T20:26:39.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:39 vm05 ceph-mon[61345]: from='client.49571 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm05-94310-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:26:39.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:39 vm05 ceph-mon[61345]: osdmap e348: 8 total, 8 up, 8 in 2026-03-09T20:26:39.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:39 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-51","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T20:26:39.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:39 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1165061664' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWritePP_vm05-94310-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm05-94310-66"}]: dispatch 2026-03-09T20:26:39.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:39 vm05 ceph-mon[61345]: from='client.49571 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWritePP_vm05-94310-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm05-94310-66"}]: dispatch 2026-03-09T20:26:39.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:39 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-51","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T20:26:39.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:39 vm05 ceph-mon[51870]: pgmap v467: 292 pgs: 292 active+clean; 8.3 MiB data, 738 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 3 op/s 2026-03-09T20:26:39.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:39 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-51","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T20:26:39.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:39 vm05 ceph-mon[51870]: from='client.49571 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm05-94310-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:26:39.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:39 vm05 ceph-mon[51870]: osdmap e348: 8 total, 8 up, 8 in 2026-03-09T20:26:39.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:39 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-51","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T20:26:39.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:39 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/1165061664' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWritePP_vm05-94310-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm05-94310-66"}]: dispatch 2026-03-09T20:26:39.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:39 vm05 ceph-mon[51870]: from='client.49571 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWritePP_vm05-94310-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm05-94310-66"}]: dispatch 2026-03-09T20:26:39.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:39 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-51","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T20:26:39.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:39 vm09 ceph-mon[54524]: pgmap v467: 292 pgs: 292 active+clean; 8.3 MiB data, 738 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 3 op/s 2026-03-09T20:26:39.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:39 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-51","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T20:26:39.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:39 vm09 ceph-mon[54524]: from='client.49571 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm05-94310-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:26:39.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:39 vm09 ceph-mon[54524]: osdmap e348: 8 total, 8 up, 8 in 2026-03-09T20:26:39.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:39 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-51","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T20:26:39.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:39 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/1165061664' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWritePP_vm05-94310-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm05-94310-66"}]: dispatch 2026-03-09T20:26:39.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:39 vm09 ceph-mon[54524]: from='client.49571 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWritePP_vm05-94310-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm05-94310-66"}]: dispatch 2026-03-09T20:26:39.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:39 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-51","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T20:26:41.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:41 vm05 ceph-mon[61345]: pgmap v470: 292 pgs: 292 active+clean; 8.3 MiB data, 738 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:26:41.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:41 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-51","var": "target_max_objects","val": "1"}]': finished 2026-03-09T20:26:41.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:41 vm05 ceph-mon[61345]: osdmap e349: 8 total, 8 up, 8 in 2026-03-09T20:26:41.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:41 vm05 ceph-mon[51870]: pgmap v470: 292 pgs: 292 active+clean; 8.3 MiB data, 738 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:26:41.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:41 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-51","var": "target_max_objects","val": "1"}]': finished 2026-03-09T20:26:41.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:41 vm05 ceph-mon[51870]: osdmap e349: 8 total, 8 up, 8 in 2026-03-09T20:26:42.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:41 vm09 ceph-mon[54524]: pgmap v470: 292 pgs: 292 active+clean; 8.3 MiB data, 738 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:26:42.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:41 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-51","var": "target_max_objects","val": "1"}]': finished 2026-03-09T20:26:42.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:41 vm09 ceph-mon[54524]: osdmap e349: 8 total, 8 up, 8 in 2026-03-09T20:26:42.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:42 vm05 ceph-mon[61345]: from='client.49571 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWritePP_vm05-94310-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm05-94310-66"}]': finished 2026-03-09T20:26:42.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:42 vm05 ceph-mon[61345]: osdmap e350: 8 total, 8 up, 8 in 2026-03-09T20:26:42.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:42 vm05 ceph-mon[51870]: from='client.49571 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWritePP_vm05-94310-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm05-94310-66"}]': finished 2026-03-09T20:26:42.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:42 vm05 ceph-mon[51870]: 
osdmap e350: 8 total, 8 up, 8 in 2026-03-09T20:26:43.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:42 vm09 ceph-mon[54524]: from='client.49571 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWritePP_vm05-94310-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm05-94310-66"}]': finished 2026-03-09T20:26:43.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:42 vm09 ceph-mon[54524]: osdmap e350: 8 total, 8 up, 8 in 2026-03-09T20:26:43.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:43 vm05 ceph-mon[61345]: pgmap v473: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T20:26:43.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:43 vm05 ceph-mon[61345]: Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 2026-03-09T20:26:43.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:43 vm05 ceph-mon[61345]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:26:43.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:43 vm05 ceph-mon[61345]: osdmap e351: 8 total, 8 up, 8 in 2026-03-09T20:26:43.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:43 vm05 ceph-mon[51870]: pgmap v473: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T20:26:43.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:43 vm05 ceph-mon[51870]: Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 2026-03-09T20:26:43.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:43 vm05 ceph-mon[51870]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:26:43.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:43 vm05 ceph-mon[51870]: osdmap e351: 8 total, 8 up, 8 in 2026-03-09T20:26:44.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:43 vm09 ceph-mon[54524]: pgmap v473: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T20:26:44.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:43 vm09 ceph-mon[54524]: Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 2026-03-09T20:26:44.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:43 vm09 ceph-mon[54524]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:26:44.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:43 vm09 ceph-mon[54524]: osdmap e351: 8 total, 8 up, 8 in 2026-03-09T20:26:44.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:44 vm05 ceph-mon[61345]: osdmap e352: 8 total, 8 up, 8 in 2026-03-09T20:26:44.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:44 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1165061664' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm05-94310-66"}]: dispatch
2026-03-09T20:26:44.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:44 vm05 ceph-mon[61345]: from='client.49571 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm05-94310-66"}]: dispatch
2026-03-09T20:26:44.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:44 vm05 ceph-mon[51870]: osdmap e352: 8 total, 8 up, 8 in
2026-03-09T20:26:44.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:44 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1165061664' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm05-94310-66"}]: dispatch
2026-03-09T20:26:44.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:44 vm05 ceph-mon[51870]: from='client.49571 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm05-94310-66"}]: dispatch
2026-03-09T20:26:45.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:44 vm09 ceph-mon[54524]: osdmap e352: 8 total, 8 up, 8 in
2026-03-09T20:26:45.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:44 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1165061664' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm05-94310-66"}]: dispatch
2026-03-09T20:26:45.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:44 vm09 ceph-mon[54524]: from='client.49571 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm05-94310-66"}]: dispatch
2026-03-09T20:26:45.555 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [
2026-03-09T20:26:45.555 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAioEC.RoundTripWriteFullPP2 (3064 ms)
2026-03-09T20:26:45.555 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.SimpleStatPP
2026-03-09T20:26:45.555 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAioEC.SimpleStatPP (7120 ms)
2026-03-09T20:26:45.555 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.SimpleStatPPNS
2026-03-09T20:26:45.555 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAioEC.SimpleStatPPNS (7027 ms)
2026-03-09T20:26:45.555 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.StatRemovePP
2026-03-09T20:26:45.555 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAioEC.StatRemovePP (7091 ms)
2026-03-09T20:26:45.555 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.ExecuteClassPP
2026-03-09T20:26:45.555 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAioEC.ExecuteClassPP (7274 ms)
2026-03-09T20:26:45.555 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.OmapPP
2026-03-09T20:26:45.555 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAioEC.OmapPP (7021 ms)
2026-03-09T20:26:45.555 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.MultiWritePP
2026-03-09T20:26:45.556 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAioEC.MultiWritePP (7037 ms)
2026-03-09T20:26:45.556 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [----------] 20 tests from LibRadosAioEC (140289 ms total)
2026-03-09T20:26:45.556 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp:
2026-03-09T20:26:45.556 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [----------] Global test environment tear-down
2026-03-09T20:26:45.556 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [==========] 57 tests from 4 test suites ran. (295916 ms total)
2026-03-09T20:26:45.556 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ PASSED ] 57 tests.
2026-03-09T20:26:45.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:45 vm09 ceph-mon[54524]: pgmap v476: 292 pgs: 292 active+clean; 8.3 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
2026-03-09T20:26:45.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:45 vm09 ceph-mon[54524]: from='client.49571 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm05-94310-66"}]': finished
2026-03-09T20:26:45.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:45 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1165061664' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm05-94310-66"}]: dispatch
2026-03-09T20:26:45.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:45 vm09 ceph-mon[54524]: osdmap e353: 8 total, 8 up, 8 in
2026-03-09T20:26:45.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:45 vm09 ceph-mon[54524]: from='client.49571 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm05-94310-66"}]: dispatch
2026-03-09T20:26:45.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:45 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y'
2026-03-09T20:26:45.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:45 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T20:26:45.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:45 vm09 ceph-mon[54524]: from='client.49571 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm05-94310-66"}]': finished
2026-03-09T20:26:45.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:45 vm09 ceph-mon[54524]: osdmap e354: 8 total, 8 up, 8 in
2026-03-09T20:26:45.773 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:26:45 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available
2026-03-09T20:26:45.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:45 vm05 ceph-mon[61345]: pgmap v476: 292 pgs: 292 active+clean; 8.3 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
2026-03-09T20:26:45.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:45 vm05 ceph-mon[61345]: from='client.49571 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm05-94310-66"}]': finished
2026-03-09T20:26:45.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:45 vm05 ceph-mon[61345]: from='client.?
v1:192.168.123.105:0/1165061664' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm05-94310-66"}]: dispatch 2026-03-09T20:26:45.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:45 vm05 ceph-mon[61345]: osdmap e353: 8 total, 8 up, 8 in 2026-03-09T20:26:45.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:45 vm05 ceph-mon[61345]: from='client.49571 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm05-94310-66"}]: dispatch 2026-03-09T20:26:45.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:45 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:26:45.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:45 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:26:45.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:45 vm05 ceph-mon[61345]: from='client.49571 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm05-94310-66"}]': finished 2026-03-09T20:26:45.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:45 vm05 ceph-mon[61345]: osdmap e354: 8 total, 8 up, 8 in 2026-03-09T20:26:45.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:45 vm05 ceph-mon[51870]: pgmap v476: 292 pgs: 292 active+clean; 8.3 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T20:26:45.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:45 vm05 ceph-mon[51870]: from='client.49571 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm05-94310-66"}]': finished 2026-03-09T20:26:45.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:45 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/1165061664' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm05-94310-66"}]: dispatch 2026-03-09T20:26:45.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:45 vm05 ceph-mon[51870]: osdmap e353: 8 total, 8 up, 8 in 2026-03-09T20:26:45.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:45 vm05 ceph-mon[51870]: from='client.49571 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm05-94310-66"}]: dispatch 2026-03-09T20:26:45.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:45 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:26:45.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:45 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:26:45.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:45 vm05 ceph-mon[51870]: from='client.49571 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm05-94310-66"}]': finished 2026-03-09T20:26:45.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:45 vm05 ceph-mon[51870]: osdmap e354: 8 total, 8 up, 8 in 2026-03-09T20:26:46.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:46 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:26:46.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:46 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:26:47.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:46 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:26:47.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:47 vm05 ceph-mon[61345]: pgmap v479: 292 pgs: 292 active+clean; 8.3 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T20:26:47.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:47 vm05 ceph-mon[51870]: pgmap v479: 292 pgs: 292 active+clean; 8.3 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T20:26:48.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:47 vm09 ceph-mon[54524]: pgmap v479: 292 pgs: 292 active+clean; 8.3 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T20:26:48.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:26:48 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:26:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:26:49.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:49 vm05 ceph-mon[61345]: pgmap v480: 292 pgs: 292 active+clean; 8.3 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 893 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T20:26:49.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:49 vm05 ceph-mon[51870]: pgmap v480: 292 pgs: 292 active+clean; 8.3 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 893 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T20:26:50.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:49 vm09 ceph-mon[54524]: pgmap v480: 292 pgs: 292 active+clean; 8.3 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 893 B/s rd, 0 B/s wr, 1 op/s 
2026-03-09T20:26:50.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:50 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:26:50.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:50 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:50.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:50 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:50.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:50 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:26:50.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:50 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:26:50.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:50 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:26:50.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:50 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:26:50.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:50 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:50.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:50 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:50.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:50 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:26:50.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:50 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:26:50.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:50 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:26:51.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:50 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:26:51.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:50 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:51.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:50 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:51.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:50 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:26:51.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:50 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:26:51.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:50 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:26:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:51 vm05 ceph-mon[61345]: pgmap v481: 292 pgs: 292 active+clean; 8.3 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 760 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T20:26:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:51 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:26:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:51 vm05 ceph-mon[61345]: osdmap e355: 8 total, 8 up, 8 in 2026-03-09T20:26:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:51 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-51"}]: dispatch 2026-03-09T20:26:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:51 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-51"}]: dispatch 2026-03-09T20:26:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:51 vm05 ceph-mon[61345]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:26:51.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:51 vm05 ceph-mon[51870]: pgmap v481: 292 pgs: 292 active+clean; 8.3 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 760 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T20:26:51.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:51 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:26:51.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:51 vm05 ceph-mon[51870]: osdmap e355: 8 total, 8 up, 8 in 2026-03-09T20:26:51.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:51 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-51"}]: dispatch 2026-03-09T20:26:51.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:51 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-51"}]: dispatch 2026-03-09T20:26:51.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:51 vm05 ceph-mon[51870]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:26:52.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:51 vm09 ceph-mon[54524]: pgmap v481: 292 pgs: 292 active+clean; 8.3 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 760 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T20:26:52.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:51 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:26:52.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:51 vm09 ceph-mon[54524]: osdmap e355: 8 total, 8 up, 8 in 2026-03-09T20:26:52.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:51 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-51"}]: dispatch 2026-03-09T20:26:52.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:51 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-51"}]: dispatch 2026-03-09T20:26:52.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:51 vm09 ceph-mon[54524]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:26:52.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:52 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-51"}]': finished 2026-03-09T20:26:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:52 vm05 ceph-mon[61345]: osdmap e356: 8 total, 8 up, 8 in 2026-03-09T20:26:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:52 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:52 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:52 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-51"}]: dispatch 2026-03-09T20:26:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:52 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-51"}]: dispatch 2026-03-09T20:26:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:52 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-51"}]': finished 2026-03-09T20:26:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:52 vm05 ceph-mon[51870]: osdmap e356: 8 total, 8 up, 8 in 2026-03-09T20:26:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:52 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:52 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:52 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-51"}]: dispatch 2026-03-09T20:26:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:52 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-51"}]: dispatch 2026-03-09T20:26:53.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:52 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-51"}]': finished 2026-03-09T20:26:53.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:52 vm09 ceph-mon[54524]: osdmap e356: 8 total, 8 up, 8 in 2026-03-09T20:26:53.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:52 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:53.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:52 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:53.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:52 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-51"}]: dispatch 2026-03-09T20:26:53.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:52 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-51"}]: dispatch 2026-03-09T20:26:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:53 vm05 ceph-mon[61345]: pgmap v484: 292 pgs: 292 active+clean; 8.3 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 761 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T20:26:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:53 vm05 ceph-mon[61345]: Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-09T20:26:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:53 vm05 ceph-mon[61345]: osdmap e357: 8 total, 8 up, 8 in 2026-03-09T20:26:53.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:53 vm05 ceph-mon[51870]: pgmap v484: 292 pgs: 292 active+clean; 8.3 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 761 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T20:26:53.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:53 vm05 ceph-mon[51870]: Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-09T20:26:53.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:53 vm05 ceph-mon[51870]: osdmap e357: 8 total, 8 up, 8 in 2026-03-09T20:26:54.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:53 vm09 ceph-mon[54524]: pgmap v484: 292 pgs: 292 active+clean; 8.3 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 761 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T20:26:54.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:53 vm09 ceph-mon[54524]: Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-09T20:26:54.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:53 vm09 ceph-mon[54524]: osdmap e357: 8 total, 8 up, 8 in 2026-03-09T20:26:54.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:54 vm05 ceph-mon[61345]: osdmap e358: 8 total, 8 up, 8 in 2026-03-09T20:26:54.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:54 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-53","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:26:54.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:54 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-53","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:26:54.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:54 vm05 ceph-mon[51870]: osdmap e358: 8 total, 8 up, 8 in 2026-03-09T20:26:54.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:54 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-53","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:26:54.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:54 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-53","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:26:55.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:54 vm09 ceph-mon[54524]: osdmap e358: 8 total, 8 up, 8 in 2026-03-09T20:26:55.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:54 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-53","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:26:55.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:54 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-53","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:26:55.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:55 vm09 ceph-mon[54524]: pgmap v487: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-09T20:26:55.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:55 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-53","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:26:55.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:55 vm09 ceph-mon[54524]: osdmap e359: 8 total, 8 up, 8 in 2026-03-09T20:26:55.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:55 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:55.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:55 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:55.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:55 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-53"}]: dispatch 2026-03-09T20:26:55.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:55 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-53"}]: dispatch 2026-03-09T20:26:55.773 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:26:55 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:26:56.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:55 vm05 ceph-mon[61345]: pgmap v487: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-09T20:26:56.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:55 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-53","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:26:56.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:55 vm05 ceph-mon[61345]: osdmap e359: 8 total, 8 up, 8 in 2026-03-09T20:26:56.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:55 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:56.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:55 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:56.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:55 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-53"}]: dispatch 2026-03-09T20:26:56.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:55 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-53"}]: dispatch 2026-03-09T20:26:56.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:55 vm05 ceph-mon[51870]: pgmap v487: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-09T20:26:56.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:55 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-53","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:26:56.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:55 vm05 ceph-mon[51870]: osdmap e359: 8 total, 8 up, 8 in 2026-03-09T20:26:56.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:55 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:56.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:55 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:56.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:55 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-53"}]: dispatch 2026-03-09T20:26:56.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:55 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-53"}]: dispatch 2026-03-09T20:26:57.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:56 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:26:57.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:56 vm09 ceph-mon[54524]: osdmap e360: 8 total, 8 up, 8 in 2026-03-09T20:26:57.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:56 vm09 ceph-mon[54524]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:26:57.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:56 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:26:57.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:56 vm05 ceph-mon[61345]: osdmap e360: 8 total, 8 up, 8 in 2026-03-09T20:26:57.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:56 vm05 ceph-mon[61345]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:26:57.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:56 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:26:57.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:56 vm05 ceph-mon[51870]: osdmap e360: 8 total, 8 up, 8 in 2026-03-09T20:26:57.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:56 vm05 ceph-mon[51870]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:26:58.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:57 vm09 ceph-mon[54524]: pgmap v490: 260 pgs: 260 active+clean; 8.3 MiB data, 775 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-09T20:26:58.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:57 vm09 ceph-mon[54524]: osdmap e361: 8 total, 8 up, 8 in 2026-03-09T20:26:58.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:57 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-55","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:26:58.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:57 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-55","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:26:58.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:57 vm05 ceph-mon[61345]: pgmap v490: 260 pgs: 260 active+clean; 8.3 MiB data, 775 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-09T20:26:58.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:57 vm05 ceph-mon[61345]: osdmap e361: 8 total, 8 up, 8 in 2026-03-09T20:26:58.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:57 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-55","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:26:58.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:57 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-55","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:26:58.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:57 vm05 ceph-mon[51870]: pgmap v490: 260 pgs: 260 active+clean; 8.3 MiB data, 775 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-09T20:26:58.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:57 vm05 ceph-mon[51870]: osdmap e361: 8 total, 8 up, 8 in 2026-03-09T20:26:58.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:57 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-55","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:26:58.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:57 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-55","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:26:58.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:58 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-55","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:26:58.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:58 vm05 ceph-mon[61345]: osdmap e362: 8 total, 8 up, 8 in 2026-03-09T20:26:58.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:58 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:26:58.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:58 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:58.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:58 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:58.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:58 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-55"}]: dispatch 2026-03-09T20:26:58.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:58 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-55"}]: dispatch 2026-03-09T20:26:58.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:58 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-55","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:26:58.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:58 vm05 ceph-mon[51870]: osdmap e362: 8 total, 8 up, 8 in 2026-03-09T20:26:58.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:58 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:26:58.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:58 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:58.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:58 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:58.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:58 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-55"}]: dispatch 2026-03-09T20:26:58.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:58 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-55"}]: dispatch 2026-03-09T20:26:58.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:26:58 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:26:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:26:59.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:58 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-55","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:26:59.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:58 vm09 ceph-mon[54524]: osdmap e362: 8 total, 8 up, 8 in 2026-03-09T20:26:59.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:58 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:26:59.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:58 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:59.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:58 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:26:59.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:58 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-55"}]: dispatch 2026-03-09T20:26:59.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:58 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-55"}]: dispatch 2026-03-09T20:27:00.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:59 vm09 ceph-mon[54524]: pgmap v493: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 775 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T20:27:00.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:26:59 vm09 ceph-mon[54524]: osdmap e363: 8 total, 8 up, 8 in 2026-03-09T20:27:00.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:59 vm05 ceph-mon[61345]: pgmap v493: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 775 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T20:27:00.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:26:59 vm05 ceph-mon[61345]: osdmap e363: 8 total, 8 up, 8 in 2026-03-09T20:27:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:59 vm05 ceph-mon[51870]: pgmap v493: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 775 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T20:27:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:26:59 vm05 ceph-mon[51870]: osdmap e363: 8 total, 8 up, 8 in 2026-03-09T20:27:01.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:00 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:27:01.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:00 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:27:01.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:00 vm05 ceph-mon[61345]: osdmap e364: 8 total, 8 up, 8 in 2026-03-09T20:27:01.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:00 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-57","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:27:01.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:00 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-57","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:27:01.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:00 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:27:01.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:00 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:27:01.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:00 vm05 ceph-mon[51870]: osdmap e364: 8 total, 8 up, 8 in 2026-03-09T20:27:01.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:00 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-57","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:27:01.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:00 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-57","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:27:01.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:00 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:27:01.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:00 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:27:01.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:00 vm09 ceph-mon[54524]: osdmap e364: 8 total, 8 up, 8 in 2026-03-09T20:27:01.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:00 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-57","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:27:01.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:00 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-57","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:27:02.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:01 vm09 ceph-mon[54524]: pgmap v496: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 775 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:27:02.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:01 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-57","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:27:02.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:01 vm09 ceph-mon[54524]: osdmap e365: 8 total, 8 up, 8 in 2026-03-09T20:27:02.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:01 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:27:02.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:01 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:27:02.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:01 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-57"}]: dispatch 2026-03-09T20:27:02.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:01 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-57"}]: dispatch 2026-03-09T20:27:02.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:01 vm05 ceph-mon[61345]: pgmap v496: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 775 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:27:02.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:01 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-57","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:27:02.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:01 vm05 ceph-mon[61345]: osdmap e365: 8 total, 8 up, 8 in 2026-03-09T20:27:02.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:01 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:27:02.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:01 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:27:02.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:01 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-57"}]: dispatch 2026-03-09T20:27:02.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:01 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-57"}]: dispatch 2026-03-09T20:27:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:01 vm05 ceph-mon[51870]: pgmap v496: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 775 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:27:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:01 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-57","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:27:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:01 vm05 ceph-mon[51870]: osdmap e365: 8 total, 8 up, 8 in 2026-03-09T20:27:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:01 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:27:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:01 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:27:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:01 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-57"}]: dispatch 2026-03-09T20:27:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:01 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-57"}]: dispatch 2026-03-09T20:27:03.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:02 vm09 ceph-mon[54524]: osdmap e366: 8 total, 8 up, 8 in 2026-03-09T20:27:03.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:02 vm09 ceph-mon[54524]: pgmap v499: 260 pgs: 260 active+clean; 8.3 MiB data, 794 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-09T20:27:03.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:02 vm05 ceph-mon[61345]: osdmap e366: 8 total, 8 up, 8 in 2026-03-09T20:27:03.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:02 vm05 ceph-mon[61345]: pgmap v499: 260 pgs: 260 active+clean; 8.3 MiB data, 794 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-09T20:27:03.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:02 vm05 ceph-mon[51870]: osdmap e366: 8 total, 8 up, 8 in 2026-03-09T20:27:03.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:02 vm05 ceph-mon[51870]: pgmap v499: 260 pgs: 260 active+clean; 8.3 MiB data, 794 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-09T20:27:04.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:03 vm09 ceph-mon[54524]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:27:04.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:03 vm09 ceph-mon[54524]: osdmap e367: 8 total, 8 up, 8 in 2026-03-09T20:27:04.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:03 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:27:04.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:03 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:27:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:03 vm05 ceph-mon[61345]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:27:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:03 vm05 ceph-mon[61345]: osdmap e367: 8 total, 8 up, 8 in 2026-03-09T20:27:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:03 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:27:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:03 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:27:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:03 vm05 ceph-mon[51870]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:27:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:03 vm05 ceph-mon[51870]: osdmap e367: 8 total, 8 up, 8 in 2026-03-09T20:27:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:03 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:27:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:03 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:27:05.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:04 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-59","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:27:05.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:04 vm09 ceph-mon[54524]: osdmap e368: 8 total, 8 up, 8 in 2026-03-09T20:27:05.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:04 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:27:05.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:04 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:27:05.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:04 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:27:05.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:04 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-59"}]: dispatch 2026-03-09T20:27:05.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:04 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-59"}]: dispatch 2026-03-09T20:27:05.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:04 vm09 ceph-mon[54524]: pgmap v502: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 794 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-09T20:27:05.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:04 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-59","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:27:05.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:04 vm05 ceph-mon[61345]: osdmap e368: 8 total, 8 up, 8 in 2026-03-09T20:27:05.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:04 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:27:05.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:04 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:27:05.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:04 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:27:05.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:04 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-59"}]: dispatch 2026-03-09T20:27:05.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:04 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-59"}]: dispatch 2026-03-09T20:27:05.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:04 vm05 ceph-mon[61345]: pgmap v502: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 794 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-09T20:27:05.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:04 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-59","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:27:05.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:04 vm05 ceph-mon[51870]: osdmap e368: 8 total, 8 up, 8 in 2026-03-09T20:27:05.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:04 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:27:05.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:04 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:27:05.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:04 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:27:05.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:04 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-59"}]: dispatch 2026-03-09T20:27:05.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:04 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-59"}]: dispatch 2026-03-09T20:27:05.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:04 vm05 ceph-mon[51870]: pgmap v502: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 794 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-09T20:27:05.773 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:27:05 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:27:06.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:05 vm09 ceph-mon[54524]: osdmap e369: 8 total, 8 up, 8 in 2026-03-09T20:27:06.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:05 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:27:06.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:05 vm05 ceph-mon[61345]: osdmap e369: 8 total, 8 up, 8 in 2026-03-09T20:27:06.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:05 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:27:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:05 vm05 ceph-mon[51870]: osdmap e369: 8 total, 8 up, 8 in 2026-03-09T20:27:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:05 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:27:07.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:06 vm09 ceph-mon[54524]: osdmap e370: 8 total, 8 up, 8 in 2026-03-09T20:27:07.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:06 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:27:07.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:06 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:27:07.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:06 vm09 ceph-mon[54524]: pgmap v505: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 833 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 3 op/s 2026-03-09T20:27:07.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:06 vm05 ceph-mon[61345]: osdmap e370: 8 total, 8 up, 8 in 2026-03-09T20:27:07.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:06 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:27:07.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:06 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:27:07.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:06 vm05 ceph-mon[61345]: pgmap v505: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 833 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 3 op/s 2026-03-09T20:27:07.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:06 vm05 ceph-mon[51870]: osdmap e370: 8 total, 8 up, 8 in 2026-03-09T20:27:07.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:06 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:27:07.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:06 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:27:07.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:06 vm05 ceph-mon[51870]: pgmap v505: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 833 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 3 op/s 2026-03-09T20:27:08.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:07 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-61","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:27:08.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:07 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:27:08.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:07 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:27:08.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:07 vm09 ceph-mon[54524]: osdmap e371: 8 total, 8 up, 8 in 2026-03-09T20:27:08.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:07 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:27:08.311 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:07 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-61","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:27:08.311 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:07 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:27:08.311 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:07 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:27:08.311 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:07 vm05 ceph-mon[61345]: osdmap e371: 8 total, 8 up, 8 in 2026-03-09T20:27:08.311 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:07 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:27:08.311 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:07 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-61","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:27:08.311 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:07 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:27:08.311 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:07 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:27:08.311 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:07 vm05 ceph-mon[51870]: osdmap e371: 8 total, 8 up, 8 in 2026-03-09T20:27:08.311 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:07 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:27:08.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:27:08 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:27:08] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:27:09.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:08 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T20:27:09.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:08 vm09 ceph-mon[54524]: osdmap e372: 8 total, 8 up, 8 in 2026-03-09T20:27:09.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:08 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:27:09.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:08 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:27:09.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:08 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-61"}]: dispatch 2026-03-09T20:27:09.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:08 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-61"}]: dispatch 2026-03-09T20:27:09.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:08 vm09 ceph-mon[54524]: pgmap v508: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 833 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 3 op/s 2026-03-09T20:27:09.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:09 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T20:27:09.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:09 vm05 ceph-mon[61345]: osdmap e372: 8 total, 8 up, 8 in 2026-03-09T20:27:09.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:09 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:27:09.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:09 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:27:09.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:09 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-61"}]: dispatch 2026-03-09T20:27:09.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:09 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-61"}]: dispatch 2026-03-09T20:27:09.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:09 vm05 ceph-mon[61345]: pgmap v508: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 833 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 3 op/s 2026-03-09T20:27:09.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:08 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T20:27:09.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:08 vm05 ceph-mon[51870]: osdmap e372: 8 total, 8 up, 8 in 2026-03-09T20:27:09.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:08 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:27:09.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:08 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:27:09.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:08 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-61"}]: dispatch 2026-03-09T20:27:09.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:08 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-61"}]: dispatch 2026-03-09T20:27:09.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:08 vm05 ceph-mon[51870]: pgmap v508: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 833 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 3 op/s 2026-03-09T20:27:10.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:10 vm09 ceph-mon[54524]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:27:10.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:10 vm09 ceph-mon[54524]: osdmap e373: 8 total, 8 up, 8 in 2026-03-09T20:27:10.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:10 vm05 ceph-mon[61345]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:27:10.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:10 vm05 ceph-mon[61345]: osdmap e373: 8 total, 8 up, 8 in 2026-03-09T20:27:10.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:10 vm05 ceph-mon[51870]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:27:10.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:10 vm05 ceph-mon[51870]: osdmap e373: 8 total, 8 up, 8 in 2026-03-09T20:27:11.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:11 vm05 ceph-mon[61345]: osdmap e374: 8 total, 8 up, 8 in 2026-03-09T20:27:11.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:11 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-63","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:27:11.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:11 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-63","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:27:11.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:11 vm05 ceph-mon[61345]: pgmap v511: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 833 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:27:11.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:11 vm05 ceph-mon[51870]: osdmap e374: 8 total, 8 up, 8 in 2026-03-09T20:27:11.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:11 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-63","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:27:11.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:11 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-63","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:27:11.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:11 vm05 ceph-mon[51870]: pgmap v511: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 833 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:27:11.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:11 vm09 ceph-mon[54524]: osdmap e374: 8 total, 8 up, 8 in 2026-03-09T20:27:11.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:11 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-63","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:27:11.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:11 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-63","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:27:11.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:11 vm09 ceph-mon[54524]: pgmap v511: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 833 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:27:12.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:12 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-63","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:27:12.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:12 vm05 ceph-mon[61345]: osdmap e375: 8 total, 8 up, 8 in 2026-03-09T20:27:12.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:12 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:27:12.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:12 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:27:12.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:12 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:27:12.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:12 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-63","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:27:12.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:12 vm05 ceph-mon[51870]: osdmap e375: 8 total, 8 up, 8 in 2026-03-09T20:27:12.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:12 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:27:12.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:12 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:27:12.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:12 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:27:12.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:12 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-63","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:27:12.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:12 vm09 ceph-mon[54524]: osdmap e375: 8 total, 8 up, 8 in 2026-03-09T20:27:12.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:12 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:27:12.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:12 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:27:12.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:12 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:27:13.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:13 vm05 ceph-mon[61345]: pgmap v513: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 834 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 478 B/s wr, 1 op/s 2026-03-09T20:27:13.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:13 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T20:27:13.912 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:13 vm05 ceph-mon[61345]: osdmap e376: 8 total, 8 up, 8 in 2026-03-09T20:27:13.912 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:13 vm05 ceph-mon[51870]: pgmap v513: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 834 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 478 B/s wr, 1 op/s 2026-03-09T20:27:13.912 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:13 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T20:27:13.912 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:13 vm05 ceph-mon[51870]: osdmap e376: 8 total, 8 up, 8 in 2026-03-09T20:27:14.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:13 vm09 ceph-mon[54524]: pgmap v513: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 834 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 478 B/s wr, 1 op/s 2026-03-09T20:27:14.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:13 vm09 ceph-mon[54524]: 
from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T20:27:14.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:13 vm09 ceph-mon[54524]: osdmap e376: 8 total, 8 up, 8 in 2026-03-09T20:27:15.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:14 vm05 ceph-mon[61345]: osdmap e377: 8 total, 8 up, 8 in 2026-03-09T20:27:15.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:14 vm05 ceph-mon[51870]: osdmap e377: 8 total, 8 up, 8 in 2026-03-09T20:27:15.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:14 vm09 ceph-mon[54524]: osdmap e377: 8 total, 8 up, 8 in 2026-03-09T20:27:15.773 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:27:15 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:27:16.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:15 vm05 ceph-mon[61345]: pgmap v516: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 834 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 483 B/s wr, 1 op/s 2026-03-09T20:27:16.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:15 vm05 ceph-mon[61345]: osdmap e378: 8 total, 8 up, 8 in 2026-03-09T20:27:16.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:15 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:27:16.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:15 vm05 ceph-mon[51870]: pgmap v516: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 834 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 483 B/s wr, 1 op/s 2026-03-09T20:27:16.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:15 vm05 ceph-mon[51870]: osdmap e378: 8 total, 8 up, 8 in 2026-03-09T20:27:16.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:15 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:27:16.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:15 vm09 ceph-mon[54524]: pgmap v516: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 834 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 483 B/s wr, 1 op/s 2026-03-09T20:27:16.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:15 vm09 ceph-mon[54524]: osdmap e378: 8 total, 8 up, 8 in 2026-03-09T20:27:16.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:15 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:27:17.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:16 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:27:17.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:16 vm05 ceph-mon[61345]: osdmap e379: 8 total, 8 up, 8 in 2026-03-09T20:27:17.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:16 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:27:17.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:16 vm05 ceph-mon[51870]: osdmap e379: 8 total, 8 up, 8 in 2026-03-09T20:27:17.273 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:16 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:27:17.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:16 vm09 ceph-mon[54524]: osdmap e379: 8 total, 8 up, 8 in 2026-03-09T20:27:18.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:17 vm05 ceph-mon[61345]: pgmap v519: 292 pgs: 292 active+clean; 8.3 MiB data, 856 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 2.2 KiB/s wr, 5 op/s 2026-03-09T20:27:18.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:17 vm05 ceph-mon[61345]: osdmap e380: 8 total, 8 up, 8 in 2026-03-09T20:27:18.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:17 vm05 ceph-mon[51870]: pgmap v519: 292 pgs: 292 active+clean; 8.3 MiB data, 856 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 2.2 KiB/s wr, 5 op/s 2026-03-09T20:27:18.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:17 vm05 ceph-mon[51870]: osdmap e380: 8 total, 8 up, 8 in 2026-03-09T20:27:18.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:17 vm09 ceph-mon[54524]: pgmap v519: 292 pgs: 292 active+clean; 8.3 MiB data, 856 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 2.2 KiB/s wr, 5 op/s 2026-03-09T20:27:18.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:17 vm09 ceph-mon[54524]: osdmap e380: 8 total, 8 up, 8 in 2026-03-09T20:27:18.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:27:18 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:27:18] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:27:19.957 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:19 vm09 ceph-mon[54524]: pgmap v521: 292 pgs: 292 active+clean; 8.3 MiB data, 856 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 2.0 KiB/s wr, 5 op/s 2026-03-09T20:27:20.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:19 vm05 ceph-mon[61345]: pgmap v521: 292 pgs: 292 active+clean; 8.3 MiB data, 856 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 2.0 KiB/s wr, 5 op/s 2026-03-09T20:27:20.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:19 vm05 ceph-mon[51870]: pgmap v521: 292 pgs: 292 active+clean; 8.3 MiB data, 856 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 2.0 KiB/s wr, 5 op/s 2026-03-09T20:27:22.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:21 vm09 ceph-mon[54524]: pgmap v522: 292 pgs: 292 active+clean; 8.3 MiB data, 856 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1.5 KiB/s wr, 3 op/s 2026-03-09T20:27:22.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:21 vm05 ceph-mon[61345]: pgmap v522: 292 pgs: 292 active+clean; 8.3 MiB data, 856 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1.5 KiB/s wr, 3 op/s 2026-03-09T20:27:22.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:21 vm05 ceph-mon[51870]: pgmap v522: 292 pgs: 292 active+clean; 8.3 MiB data, 856 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1.5 KiB/s wr, 3 op/s 2026-03-09T20:27:23.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:22 vm09 ceph-mon[54524]: pgmap v523: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 857 MiB used, 159 GiB / 160 GiB avail; 2.3 KiB/s rd, 1.5 KiB/s wr, 4 op/s 2026-03-09T20:27:23.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:22 vm05 ceph-mon[61345]: pgmap v523: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 857 MiB used, 159 GiB / 160 GiB avail; 2.3 KiB/s rd, 1.5 KiB/s wr, 
4 op/s 2026-03-09T20:27:23.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:22 vm05 ceph-mon[51870]: pgmap v523: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 857 MiB used, 159 GiB / 160 GiB avail; 2.3 KiB/s rd, 1.5 KiB/s wr, 4 op/s 2026-03-09T20:27:25.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:25 vm05 ceph-mon[61345]: pgmap v524: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 857 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 241 B/s wr, 1 op/s 2026-03-09T20:27:25.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:25 vm05 ceph-mon[51870]: pgmap v524: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 857 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 241 B/s wr, 1 op/s 2026-03-09T20:27:25.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:25 vm09 ceph-mon[54524]: pgmap v524: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 857 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 241 B/s wr, 1 op/s 2026-03-09T20:27:25.773 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:27:25 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:27:26.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:26 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:27:26.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:26 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:27:26.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:26 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:27:27.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:27 vm05 ceph-mon[61345]: pgmap v525: 292 pgs: 292 active+clean; 8.3 MiB data, 857 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 204 B/s wr, 1 op/s 2026-03-09T20:27:27.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:27 vm05 ceph-mon[51870]: pgmap v525: 292 pgs: 292 active+clean; 8.3 MiB data, 857 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 204 B/s wr, 1 op/s 2026-03-09T20:27:27.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:27 vm09 ceph-mon[54524]: pgmap v525: 292 pgs: 292 active+clean; 8.3 MiB data, 857 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 204 B/s wr, 1 op/s 2026-03-09T20:27:28.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:28 vm05 ceph-mon[61345]: osdmap e381: 8 total, 8 up, 8 in 2026-03-09T20:27:28.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:28 vm05 ceph-mon[51870]: osdmap e381: 8 total, 8 up, 8 in 2026-03-09T20:27:28.660 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:27:28 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:27:28] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:27:28.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:28 vm09 ceph-mon[54524]: osdmap e381: 8 total, 8 up, 8 in 2026-03-09T20:27:29.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:29 vm05 ceph-mon[61345]: pgmap v527: 292 pgs: 292 active+clean; 8.3 MiB data, 857 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 204 B/s wr, 1 op/s 2026-03-09T20:27:29.660 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:29 vm05 ceph-mon[51870]: pgmap v527: 292 pgs: 292 active+clean; 8.3 MiB data, 857 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 204 B/s wr, 1 op/s 2026-03-09T20:27:29.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:29 vm09 ceph-mon[54524]: pgmap v527: 292 pgs: 292 active+clean; 8.3 MiB data, 857 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 204 B/s wr, 1 op/s 2026-03-09T20:27:30.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:30 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:27:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:30 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:27:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:30 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:27:31.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:31 vm09 ceph-mon[54524]: pgmap v528: 292 pgs: 292 active+clean; 8.3 MiB data, 857 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 204 B/s wr, 1 op/s 2026-03-09T20:27:31.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:31 vm09 ceph-mon[54524]: osdmap e382: 8 total, 8 up, 8 in 2026-03-09T20:27:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:31 vm05 ceph-mon[61345]: pgmap v528: 292 pgs: 292 active+clean; 8.3 MiB data, 857 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 204 B/s wr, 1 op/s 2026-03-09T20:27:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:31 vm05 ceph-mon[61345]: osdmap e382: 8 total, 8 up, 8 in 2026-03-09T20:27:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:31 vm05 ceph-mon[51870]: pgmap v528: 292 pgs: 292 active+clean; 8.3 MiB data, 857 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 204 B/s wr, 1 op/s 2026-03-09T20:27:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:31 vm05 ceph-mon[51870]: osdmap e382: 8 total, 8 up, 8 in 2026-03-09T20:27:33.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:33 vm09 ceph-mon[54524]: pgmap v530: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 857 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T20:27:33.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:33 vm05 ceph-mon[61345]: pgmap v530: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 857 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T20:27:33.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:33 vm05 ceph-mon[51870]: pgmap v530: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 857 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T20:27:35.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:35 vm09 ceph-mon[54524]: pgmap v531: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 857 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T20:27:35.773 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:27:35 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:27:35.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:35 vm05 ceph-mon[51870]: pgmap v531: 292 pgs: 1 active+clean+snaptrim, 291 
active+clean; 8.3 MiB data, 857 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T20:27:35.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:35 vm05 ceph-mon[61345]: pgmap v531: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 857 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T20:27:36.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:36 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:27:36.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:36 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:27:36.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:36 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:27:37.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:37 vm09 ceph-mon[54524]: pgmap v532: 292 pgs: 292 active+clean; 8.3 MiB data, 857 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T20:27:37.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:37 vm05 ceph-mon[61345]: pgmap v532: 292 pgs: 292 active+clean; 8.3 MiB data, 857 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T20:27:37.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:37 vm05 ceph-mon[51870]: pgmap v532: 292 pgs: 292 active+clean; 8.3 MiB data, 857 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T20:27:38.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:38 vm09 ceph-mon[54524]: osdmap e383: 8 total, 8 up, 8 in 2026-03-09T20:27:38.811 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:38 vm05 ceph-mon[61345]: osdmap e383: 8 total, 8 up, 8 in 2026-03-09T20:27:38.811 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:38 vm05 ceph-mon[51870]: osdmap e383: 8 total, 8 up, 8 in 2026-03-09T20:27:38.811 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:27:38 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:27:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:27:39.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:39 vm09 ceph-mon[54524]: pgmap v534: 292 pgs: 292 active+clean; 8.3 MiB data, 857 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T20:27:39.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:39 vm05 ceph-mon[61345]: pgmap v534: 292 pgs: 292 active+clean; 8.3 MiB data, 857 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T20:27:39.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:39 vm05 ceph-mon[51870]: pgmap v534: 292 pgs: 292 active+clean; 8.3 MiB data, 857 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T20:27:41.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:41 vm09 ceph-mon[54524]: pgmap v535: 292 pgs: 292 active+clean; 8.3 MiB data, 857 MiB used, 159 GiB / 160 GiB avail; 554 B/s rd, 0 op/s 2026-03-09T20:27:41.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:41 vm09 ceph-mon[54524]: osdmap e384: 8 total, 8 up, 8 in 2026-03-09T20:27:41.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:41 vm05 ceph-mon[61345]: pgmap v535: 292 pgs: 292 
active+clean; 8.3 MiB data, 857 MiB used, 159 GiB / 160 GiB avail; 554 B/s rd, 0 op/s 2026-03-09T20:27:41.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:41 vm05 ceph-mon[61345]: osdmap e384: 8 total, 8 up, 8 in 2026-03-09T20:27:41.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:41 vm05 ceph-mon[51870]: pgmap v535: 292 pgs: 292 active+clean; 8.3 MiB data, 857 MiB used, 159 GiB / 160 GiB avail; 554 B/s rd, 0 op/s 2026-03-09T20:27:41.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:41 vm05 ceph-mon[51870]: osdmap e384: 8 total, 8 up, 8 in 2026-03-09T20:27:44.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:43 vm09 ceph-mon[54524]: pgmap v537: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 875 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-09T20:27:44.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:43 vm05 ceph-mon[61345]: pgmap v537: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 875 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-09T20:27:44.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:43 vm05 ceph-mon[51870]: pgmap v537: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 875 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-09T20:27:46.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:27:45 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:27:46.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:45 vm09 ceph-mon[54524]: pgmap v538: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 875 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T20:27:46.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:45 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:27:46.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:45 vm05 ceph-mon[61345]: pgmap v538: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 875 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T20:27:46.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:45 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:27:46.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:45 vm05 ceph-mon[51870]: pgmap v538: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 875 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T20:27:46.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:45 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:27:47.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:47 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:27:47.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:46 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:27:47.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:47 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' 
entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:27:48.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:48 vm05 ceph-mon[61345]: pgmap v539: 292 pgs: 292 active+clean; 8.3 MiB data, 875 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:27:48.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:48 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:27:48.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:48 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:27:48.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:48 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-63"}]: dispatch 2026-03-09T20:27:48.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:48 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-63"}]: dispatch 2026-03-09T20:27:48.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:48 vm05 ceph-mon[51870]: pgmap v539: 292 pgs: 292 active+clean; 8.3 MiB data, 875 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:27:48.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:48 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:27:48.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:48 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:27:48.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:48 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-63"}]: dispatch 2026-03-09T20:27:48.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:48 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-63"}]: dispatch 2026-03-09T20:27:48.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:48 vm09 ceph-mon[54524]: pgmap v539: 292 pgs: 292 active+clean; 8.3 MiB data, 875 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:27:48.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:48 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:27:48.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:48 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:27:48.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:48 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-63"}]: dispatch 2026-03-09T20:27:48.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:48 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-63"}]: dispatch 2026-03-09T20:27:48.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:27:48 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:27:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:27:49.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:49 vm05 ceph-mon[61345]: osdmap e385: 8 total, 8 up, 8 in 2026-03-09T20:27:49.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:49 vm05 ceph-mon[61345]: pgmap v541: 260 pgs: 260 active+clean; 8.3 MiB data, 875 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:27:49.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:49 vm05 ceph-mon[61345]: osdmap e386: 8 total, 8 up, 8 in 2026-03-09T20:27:49.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:49 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:27:49.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:49 vm05 ceph-mon[51870]: osdmap e385: 8 total, 8 up, 8 in 2026-03-09T20:27:49.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:49 vm05 ceph-mon[51870]: pgmap v541: 260 pgs: 260 active+clean; 8.3 MiB data, 875 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:27:49.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:49 vm05 ceph-mon[51870]: osdmap e386: 8 total, 8 up, 8 in 2026-03-09T20:27:49.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:49 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:27:49.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:49 vm09 ceph-mon[54524]: osdmap e385: 8 total, 8 up, 8 in 2026-03-09T20:27:49.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:49 vm09 ceph-mon[54524]: pgmap v541: 260 pgs: 260 active+clean; 8.3 MiB data, 875 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:27:49.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:49 vm09 ceph-mon[54524]: osdmap e386: 8 total, 8 up, 8 in 2026-03-09T20:27:49.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:49 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:27:50.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:50 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:27:50.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:50 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-65","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:27:50.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:50 vm05 ceph-mon[61345]: osdmap e387: 8 total, 8 up, 8 in 2026-03-09T20:27:50.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:50 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:27:50.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:50 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:27:50.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:50 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:27:50.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:50 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:27:50.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:50 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-65","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:27:50.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:50 vm05 ceph-mon[51870]: osdmap e387: 8 total, 8 up, 8 in 2026-03-09T20:27:50.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:50 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:27:50.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:50 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:27:50.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:50 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:27:50.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:50 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:27:50.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:50 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-65","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:27:50.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:50 vm09 ceph-mon[54524]: osdmap e387: 8 total, 8 up, 8 in 2026-03-09T20:27:50.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:50 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:27:50.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:50 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:27:50.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:50 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:27:51.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:51 vm09 ceph-mon[54524]: pgmap v544: 292 pgs: 29 unknown, 263 active+clean; 8.3 MiB data, 875 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:27:51.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:51 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:27:51.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:51 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:27:51.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:51 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:27:51.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:51 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:27:51.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:51 vm05 ceph-mon[61345]: pgmap v544: 292 pgs: 29 unknown, 263 active+clean; 8.3 MiB data, 875 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:27:51.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:51 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:27:51.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:51 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' 
entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:27:51.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:51 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:27:51.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:51 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:27:51.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:51 vm05 ceph-mon[51870]: pgmap v544: 292 pgs: 29 unknown, 263 active+clean; 8.3 MiB data, 875 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:27:51.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:51 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:27:51.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:51 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:27:51.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:51 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:27:51.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:51 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:27:52.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:52 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T20:27:52.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:52 vm09 ceph-mon[54524]: osdmap e388: 8 total, 8 up, 8 in 2026-03-09T20:27:52.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:52 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T20:27:52.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:52 vm05 ceph-mon[61345]: osdmap e388: 8 total, 8 up, 8 in 2026-03-09T20:27:52.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:52 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T20:27:52.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:52 vm05 ceph-mon[51870]: osdmap e388: 8 total, 8 up, 8 in 2026-03-09T20:27:53.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:53 vm09 ceph-mon[54524]: osdmap e389: 8 total, 8 up, 8 in 2026-03-09T20:27:53.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:53 vm09 ceph-mon[54524]: pgmap v547: 292 pgs: 292 active+clean; 8.3 MiB data, 875 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:27:53.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:53 vm05 ceph-mon[61345]: osdmap e389: 8 total, 8 up, 8 in 2026-03-09T20:27:53.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:53 vm05 ceph-mon[61345]: pgmap v547: 292 pgs: 292 active+clean; 8.3 MiB data, 875 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:27:53.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:53 vm05 ceph-mon[51870]: osdmap e389: 8 total, 8 up, 8 in 2026-03-09T20:27:53.660 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:53 vm05 ceph-mon[51870]: pgmap v547: 292 pgs: 292 active+clean; 8.3 MiB data, 875 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:27:54.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:54 vm09 ceph-mon[54524]: osdmap e390: 8 total, 8 up, 8 in 2026-03-09T20:27:54.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:54 vm05 ceph-mon[61345]: osdmap e390: 8 total, 8 up, 8 in 2026-03-09T20:27:54.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:54 vm05 ceph-mon[51870]: osdmap e390: 8 total, 8 up, 8 in 2026-03-09T20:27:55.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:55 vm09 ceph-mon[54524]: osdmap e391: 8 total, 8 up, 8 in 2026-03-09T20:27:55.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:55 vm09 ceph-mon[54524]: pgmap v550: 292 pgs: 292 active+clean; 8.3 MiB data, 875 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:27:55.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:55 vm05 ceph-mon[61345]: osdmap e391: 8 total, 8 up, 8 in 2026-03-09T20:27:55.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:55 vm05 ceph-mon[61345]: pgmap v550: 292 pgs: 292 active+clean; 8.3 MiB data, 875 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:27:55.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:55 vm05 ceph-mon[51870]: osdmap e391: 8 total, 8 up, 8 in 2026-03-09T20:27:55.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:55 vm05 ceph-mon[51870]: pgmap v550: 292 pgs: 292 active+clean; 8.3 MiB data, 875 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:27:56.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:27:55 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:27:56.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:56 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:27:56.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:56 vm09 ceph-mon[54524]: osdmap e392: 8 total, 8 up, 8 in 2026-03-09T20:27:56.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:56 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:27:56.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:56 vm05 ceph-mon[61345]: osdmap e392: 8 total, 8 up, 8 in 2026-03-09T20:27:56.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:56 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:27:56.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:56 vm05 ceph-mon[51870]: osdmap e392: 8 total, 8 up, 8 in 2026-03-09T20:27:57.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:57 vm09 ceph-mon[54524]: pgmap v552: 292 pgs: 292 active+clean; 8.3 MiB data, 902 MiB used, 159 GiB / 160 GiB avail; 3.6 KiB/s rd, 2.4 KiB/s wr, 9 op/s 2026-03-09T20:27:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:57 vm05 ceph-mon[61345]: pgmap v552: 292 pgs: 292 active+clean; 8.3 MiB data, 902 MiB used, 159 GiB / 160 GiB avail; 3.6 KiB/s rd, 2.4 KiB/s wr, 9 op/s 2026-03-09T20:27:57.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:57 vm05 ceph-mon[51870]: pgmap v552: 292 pgs: 292 active+clean; 8.3 MiB data, 902 MiB used, 159 GiB / 160 GiB 
avail; 3.6 KiB/s rd, 2.4 KiB/s wr, 9 op/s 2026-03-09T20:27:58.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:27:58 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:27:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:27:59.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:27:59 vm05 ceph-mon[61345]: pgmap v553: 292 pgs: 292 active+clean; 8.3 MiB data, 902 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 1.7 KiB/s wr, 6 op/s 2026-03-09T20:27:59.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:27:59 vm05 ceph-mon[51870]: pgmap v553: 292 pgs: 292 active+clean; 8.3 MiB data, 902 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 1.7 KiB/s wr, 6 op/s 2026-03-09T20:27:59.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:27:59 vm09 ceph-mon[54524]: pgmap v553: 292 pgs: 292 active+clean; 8.3 MiB data, 902 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 1.7 KiB/s wr, 6 op/s 2026-03-09T20:28:00.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:00 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:28:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:00 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:28:00.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:00 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:28:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:01 vm05 ceph-mon[61345]: pgmap v554: 292 pgs: 292 active+clean; 8.3 MiB data, 902 MiB used, 159 GiB / 160 GiB avail; 2.8 KiB/s rd, 1.4 KiB/s wr, 6 op/s 2026-03-09T20:28:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:01 vm05 ceph-mon[61345]: osdmap e393: 8 total, 8 up, 8 in 2026-03-09T20:28:01.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:01 vm05 ceph-mon[51870]: pgmap v554: 292 pgs: 292 active+clean; 8.3 MiB data, 902 MiB used, 159 GiB / 160 GiB avail; 2.8 KiB/s rd, 1.4 KiB/s wr, 6 op/s 2026-03-09T20:28:01.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:01 vm05 ceph-mon[51870]: osdmap e393: 8 total, 8 up, 8 in 2026-03-09T20:28:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:01 vm09 ceph-mon[54524]: pgmap v554: 292 pgs: 292 active+clean; 8.3 MiB data, 902 MiB used, 159 GiB / 160 GiB avail; 2.8 KiB/s rd, 1.4 KiB/s wr, 6 op/s 2026-03-09T20:28:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:01 vm09 ceph-mon[54524]: osdmap e393: 8 total, 8 up, 8 in 2026-03-09T20:28:03.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:03 vm05 ceph-mon[61345]: pgmap v556: 292 pgs: 292 active+clean; 8.3 MiB data, 902 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 1.2 KiB/s wr, 5 op/s 2026-03-09T20:28:03.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:03 vm05 ceph-mon[51870]: pgmap v556: 292 pgs: 292 active+clean; 8.3 MiB data, 902 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 1.2 KiB/s wr, 5 op/s 2026-03-09T20:28:03.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:03 vm09 ceph-mon[54524]: pgmap v556: 292 pgs: 292 active+clean; 8.3 MiB data, 902 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 1.2 KiB/s wr, 5 op/s 2026-03-09T20:28:04.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:04 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:28:04.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:04 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:28:04.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:04 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-65"}]: dispatch 2026-03-09T20:28:04.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:04 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-65"}]: dispatch 2026-03-09T20:28:04.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:04 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:28:04.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:04 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:28:04.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:04 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-65"}]: dispatch 2026-03-09T20:28:04.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:04 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-65"}]: dispatch 2026-03-09T20:28:04.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:04 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:28:04.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:04 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:28:04.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:04 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-65"}]: dispatch 2026-03-09T20:28:04.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:04 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-65"}]: dispatch 2026-03-09T20:28:05.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:05 vm05 ceph-mon[61345]: pgmap v557: 292 pgs: 292 active+clean; 8.3 MiB data, 902 MiB used, 159 GiB / 160 GiB avail; 621 B/s rd, 0 op/s 2026-03-09T20:28:05.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:05 vm05 ceph-mon[51870]: pgmap v557: 292 pgs: 292 active+clean; 8.3 MiB data, 902 MiB used, 159 GiB / 160 GiB avail; 621 B/s rd, 0 op/s 2026-03-09T20:28:05.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:05 vm09 ceph-mon[54524]: pgmap v557: 292 pgs: 292 active+clean; 8.3 MiB data, 902 MiB used, 159 GiB / 160 GiB avail; 621 B/s rd, 0 op/s 2026-03-09T20:28:05.773 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:28:05 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:28:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:06 vm05 ceph-mon[61345]: osdmap e394: 8 total, 8 up, 8 in 2026-03-09T20:28:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:06 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:28:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:06 vm05 ceph-mon[51870]: osdmap e394: 8 total, 8 up, 8 in 2026-03-09T20:28:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:06 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:28:06.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:06 vm09 ceph-mon[54524]: osdmap e394: 8 total, 8 up, 8 in 2026-03-09T20:28:06.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:06 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:28:07.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:07 vm05 ceph-mon[61345]: pgmap v559: 260 pgs: 260 active+clean; 8.3 MiB data, 902 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:28:07.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:07 vm05 ceph-mon[61345]: osdmap e395: 8 total, 8 up, 8 in 2026-03-09T20:28:07.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:07 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-67","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:28:07.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:07 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-67","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:28:07.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:07 vm05 ceph-mon[51870]: pgmap v559: 260 pgs: 260 active+clean; 8.3 MiB data, 902 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:28:07.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:07 vm05 ceph-mon[51870]: osdmap e395: 8 total, 8 up, 8 in 2026-03-09T20:28:07.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:07 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-67","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:28:07.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:07 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-67","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:28:07.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:07 vm09 ceph-mon[54524]: pgmap v559: 260 pgs: 260 active+clean; 8.3 MiB data, 902 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:28:07.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:07 vm09 ceph-mon[54524]: osdmap e395: 8 total, 8 up, 8 in 2026-03-09T20:28:07.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:07 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-67","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:28:07.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:07 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-67","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:28:08.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:08 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-67","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:28:08.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:08 vm05 ceph-mon[61345]: osdmap e396: 8 total, 8 up, 8 in 2026-03-09T20:28:08.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:08 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:28:08.760 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:08 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-67","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:28:08.760 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:08 vm05 ceph-mon[51870]: osdmap e396: 8 total, 8 up, 8 in 2026-03-09T20:28:08.760 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:08 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:28:08.760 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:28:08 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:28:08] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:28:08.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:08 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-67","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:28:08.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:08 vm09 ceph-mon[54524]: osdmap e396: 8 total, 8 up, 8 in 2026-03-09T20:28:08.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:08 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:28:09.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:09 vm05 ceph-mon[61345]: pgmap v562: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 902 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:28:09.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:09 vm05 ceph-mon[61345]: osdmap e397: 8 total, 8 up, 8 in 2026-03-09T20:28:09.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:09 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:28:09.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:09 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:28:09.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:09 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-67"}]: dispatch 2026-03-09T20:28:09.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:09 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-67"}]: dispatch 2026-03-09T20:28:09.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:09 vm05 ceph-mon[51870]: pgmap v562: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 902 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:28:09.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:09 vm05 ceph-mon[51870]: osdmap e397: 8 total, 8 up, 8 in 2026-03-09T20:28:09.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:09 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:28:09.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:09 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:28:09.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:09 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-67"}]: dispatch 2026-03-09T20:28:09.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:09 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-67"}]: dispatch 2026-03-09T20:28:09.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:09 vm09 ceph-mon[54524]: pgmap v562: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 902 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:28:09.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:09 vm09 ceph-mon[54524]: osdmap e397: 8 total, 8 up, 8 in 2026-03-09T20:28:09.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:09 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:28:09.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:09 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:28:09.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:09 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-67"}]: dispatch 2026-03-09T20:28:09.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:09 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-67"}]: dispatch 2026-03-09T20:28:10.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:10 vm09 ceph-mon[54524]: osdmap e398: 8 total, 8 up, 8 in 2026-03-09T20:28:10.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:10 vm05 ceph-mon[61345]: osdmap e398: 8 total, 8 up, 8 in 2026-03-09T20:28:10.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:10 vm05 ceph-mon[51870]: osdmap e398: 8 total, 8 up, 8 in 2026-03-09T20:28:11.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:11 vm09 ceph-mon[54524]: pgmap v565: 260 pgs: 260 active+clean; 8.3 MiB data, 902 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:28:11.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:11 vm09 ceph-mon[54524]: osdmap e399: 8 total, 8 up, 8 in 2026-03-09T20:28:11.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:11 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-69","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:28:11.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:11 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-69","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:28:11.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:11 vm05 ceph-mon[61345]: pgmap v565: 260 pgs: 260 active+clean; 8.3 MiB data, 902 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:28:11.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:11 vm05 ceph-mon[61345]: osdmap e399: 8 total, 8 up, 8 in 2026-03-09T20:28:11.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:11 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-69","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:28:11.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:11 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-69","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:28:11.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:11 vm05 ceph-mon[51870]: pgmap v565: 260 pgs: 260 active+clean; 8.3 MiB data, 902 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:28:11.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:11 vm05 ceph-mon[51870]: osdmap e399: 8 total, 8 up, 8 in 2026-03-09T20:28:11.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:11 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-69","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:28:11.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:11 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-69","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:28:13.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:12 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-69","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:28:13.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:12 vm09 ceph-mon[54524]: osdmap e400: 8 total, 8 up, 8 in 2026-03-09T20:28:13.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:12 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:28:13.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:12 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:28:13.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:12 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:28:13.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:12 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-69"}]: dispatch 2026-03-09T20:28:13.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:12 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-69"}]: dispatch 2026-03-09T20:28:13.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:12 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-69","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:28:13.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:12 vm05 ceph-mon[61345]: osdmap e400: 8 total, 8 up, 8 in 2026-03-09T20:28:13.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:12 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:28:13.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:12 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:28:13.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:12 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:28:13.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:12 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-69"}]: dispatch 2026-03-09T20:28:13.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:12 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-69"}]: dispatch 2026-03-09T20:28:13.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:12 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-69","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:28:13.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:12 vm05 ceph-mon[51870]: osdmap e400: 8 total, 8 up, 8 in 2026-03-09T20:28:13.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:12 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:28:13.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:12 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:28:13.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:12 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:28:13.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:12 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-69"}]: dispatch 2026-03-09T20:28:13.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:12 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-69"}]: dispatch 2026-03-09T20:28:14.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:13 vm09 ceph-mon[54524]: pgmap v568: 292 pgs: 13 creating+peering, 19 unknown, 260 active+clean; 8.3 MiB data, 907 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T20:28:14.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:13 vm09 ceph-mon[54524]: osdmap e401: 8 total, 8 up, 8 in 2026-03-09T20:28:14.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:13 vm09 ceph-mon[54524]: osdmap e402: 8 total, 8 up, 8 in 2026-03-09T20:28:14.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:13 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-71","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:28:14.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:13 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-71","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:28:14.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:13 vm05 ceph-mon[61345]: pgmap v568: 292 pgs: 13 creating+peering, 19 unknown, 260 active+clean; 8.3 MiB data, 907 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T20:28:14.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:13 vm05 ceph-mon[61345]: osdmap e401: 8 total, 8 up, 8 in 2026-03-09T20:28:14.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:13 vm05 ceph-mon[61345]: osdmap e402: 8 total, 8 up, 8 in 2026-03-09T20:28:14.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:13 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-71","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:28:14.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:13 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-71","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:28:14.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:13 vm05 ceph-mon[51870]: pgmap v568: 292 pgs: 13 creating+peering, 19 unknown, 260 active+clean; 8.3 MiB data, 907 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T20:28:14.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:13 vm05 ceph-mon[51870]: osdmap e401: 8 total, 8 up, 8 in 2026-03-09T20:28:14.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:13 vm05 ceph-mon[51870]: osdmap e402: 8 total, 8 up, 8 in 2026-03-09T20:28:14.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:13 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-71","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:28:14.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:13 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-71","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:28:16.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:15 vm09 ceph-mon[54524]: pgmap v571: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 907 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 511 B/s wr, 0 op/s 2026-03-09T20:28:16.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:15 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-71","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:28:16.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:15 vm09 ceph-mon[54524]: osdmap e403: 8 total, 8 up, 8 in 2026-03-09T20:28:16.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:15 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:28:16.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:15 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:28:16.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:15 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:28:16.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:15 vm09 ceph-mon[54524]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:28:16.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:15 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:28:16.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:28:15 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:28:16.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:15 vm05 ceph-mon[61345]: pgmap v571: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 907 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 511 B/s wr, 0 op/s 2026-03-09T20:28:16.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:15 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-71","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:28:16.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:15 vm05 ceph-mon[61345]: osdmap e403: 8 total, 8 up, 8 in 2026-03-09T20:28:16.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:15 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:28:16.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:15 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:28:16.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:15 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:28:16.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:15 vm05 ceph-mon[61345]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:28:16.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:15 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:28:16.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:15 vm05 ceph-mon[51870]: pgmap v571: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 907 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 511 B/s wr, 0 op/s 2026-03-09T20:28:16.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:15 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-71","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:28:16.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:15 vm05 ceph-mon[51870]: osdmap e403: 8 total, 8 up, 8 in 2026-03-09T20:28:16.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:15 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:28:16.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:15 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:28:16.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:15 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:28:16.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:15 vm05 ceph-mon[51870]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:28:16.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:15 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:28:17.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:16 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:28:17.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:16 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T20:28:17.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:16 vm09 ceph-mon[54524]: osdmap e404: 8 total, 8 up, 8 in 2026-03-09T20:28:17.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:16 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:28:17.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:16 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T20:28:17.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:16 vm05 ceph-mon[61345]: osdmap e404: 8 total, 8 up, 8 in 2026-03-09T20:28:17.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:16 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:28:17.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:16 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T20:28:17.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:16 vm05 ceph-mon[51870]: osdmap e404: 8 total, 8 up, 8 in 2026-03-09T20:28:18.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:17 vm09 ceph-mon[54524]: pgmap v574: 292 pgs: 292 active+clean; 8.3 MiB data, 911 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T20:28:18.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:17 vm09 ceph-mon[54524]: osdmap e405: 8 total, 8 up, 8 in 2026-03-09T20:28:18.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:17 vm05 ceph-mon[61345]: pgmap v574: 292 pgs: 292 active+clean; 8.3 MiB data, 911 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T20:28:18.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:17 vm05 ceph-mon[61345]: osdmap e405: 8 total, 8 up, 8 in 
2026-03-09T20:28:18.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:17 vm05 ceph-mon[51870]: pgmap v574: 292 pgs: 292 active+clean; 8.3 MiB data, 911 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T20:28:18.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:17 vm05 ceph-mon[51870]: osdmap e405: 8 total, 8 up, 8 in 2026-03-09T20:28:18.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:28:18 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:28:18] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:28:20.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:19 vm05 ceph-mon[61345]: pgmap v576: 292 pgs: 292 active+clean; 8.3 MiB data, 911 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 663 B/s wr, 1 op/s 2026-03-09T20:28:20.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:19 vm05 ceph-mon[51870]: pgmap v576: 292 pgs: 292 active+clean; 8.3 MiB data, 911 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 663 B/s wr, 1 op/s 2026-03-09T20:28:20.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:19 vm09 ceph-mon[54524]: pgmap v576: 292 pgs: 292 active+clean; 8.3 MiB data, 911 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 663 B/s wr, 1 op/s 2026-03-09T20:28:22.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:21 vm05 ceph-mon[61345]: pgmap v577: 292 pgs: 292 active+clean; 8.3 MiB data, 911 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 853 B/s wr, 3 op/s 2026-03-09T20:28:22.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:21 vm05 ceph-mon[61345]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:28:22.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:21 vm05 ceph-mon[51870]: pgmap v577: 292 pgs: 292 active+clean; 8.3 MiB data, 911 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 853 B/s wr, 3 op/s 2026-03-09T20:28:22.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:21 vm05 ceph-mon[51870]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:28:22.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:21 vm09 ceph-mon[54524]: pgmap v577: 292 pgs: 292 active+clean; 8.3 MiB data, 911 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 853 B/s wr, 3 op/s 2026-03-09T20:28:22.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:21 vm09 ceph-mon[54524]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:28:24.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:23 vm05 ceph-mon[61345]: pgmap v578: 292 pgs: 292 active+clean; 8.3 MiB data, 912 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 2.1 KiB/s wr, 5 op/s 2026-03-09T20:28:24.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:23 vm05 ceph-mon[51870]: pgmap v578: 292 pgs: 292 active+clean; 8.3 MiB data, 912 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 2.1 KiB/s wr, 5 op/s 2026-03-09T20:28:24.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:23 vm09 ceph-mon[54524]: pgmap v578: 292 pgs: 292 active+clean; 8.3 MiB data, 912 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 2.1 KiB/s wr, 5 op/s 2026-03-09T20:28:26.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:25 vm09 ceph-mon[54524]: pgmap v579: 292 pgs: 292 active+clean; 8.3 MiB data, 912 MiB used, 159 GiB / 160 GiB avail; 712 B/s rd, 1.5 KiB/s wr, 3 op/s 2026-03-09T20:28:26.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:28:25 vm09 
ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:28:26.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:25 vm05 ceph-mon[61345]: pgmap v579: 292 pgs: 292 active+clean; 8.3 MiB data, 912 MiB used, 159 GiB / 160 GiB avail; 712 B/s rd, 1.5 KiB/s wr, 3 op/s 2026-03-09T20:28:26.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:25 vm05 ceph-mon[51870]: pgmap v579: 292 pgs: 292 active+clean; 8.3 MiB data, 912 MiB used, 159 GiB / 160 GiB avail; 712 B/s rd, 1.5 KiB/s wr, 3 op/s 2026-03-09T20:28:27.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:26 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:28:27.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:26 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:28:27.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:26 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:28:28.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:27 vm05 ceph-mon[61345]: pgmap v580: 292 pgs: 292 active+clean; 8.3 MiB data, 912 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.3 KiB/s wr, 3 op/s 2026-03-09T20:28:28.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:27 vm05 ceph-mon[51870]: pgmap v580: 292 pgs: 292 active+clean; 8.3 MiB data, 912 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.3 KiB/s wr, 3 op/s 2026-03-09T20:28:28.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:27 vm09 ceph-mon[54524]: pgmap v580: 292 pgs: 292 active+clean; 8.3 MiB data, 912 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.3 KiB/s wr, 3 op/s 2026-03-09T20:28:28.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:28:28 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:28:28] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:28:30.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:29 vm05 ceph-mon[61345]: pgmap v581: 292 pgs: 292 active+clean; 8.3 MiB data, 912 MiB used, 159 GiB / 160 GiB avail; 971 B/s rd, 1.1 KiB/s wr, 3 op/s 2026-03-09T20:28:30.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:29 vm05 ceph-mon[51870]: pgmap v581: 292 pgs: 292 active+clean; 8.3 MiB data, 912 MiB used, 159 GiB / 160 GiB avail; 971 B/s rd, 1.1 KiB/s wr, 3 op/s 2026-03-09T20:28:30.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:29 vm09 ceph-mon[54524]: pgmap v581: 292 pgs: 292 active+clean; 8.3 MiB data, 912 MiB used, 159 GiB / 160 GiB avail; 971 B/s rd, 1.1 KiB/s wr, 3 op/s 2026-03-09T20:28:31.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:30 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:28:31.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:30 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:28:31.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:30 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd 
blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:28:32.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:31 vm05 ceph-mon[61345]: pgmap v582: 292 pgs: 292 active+clean; 8.3 MiB data, 912 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.1 KiB/s wr, 3 op/s 2026-03-09T20:28:32.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:31 vm05 ceph-mon[51870]: pgmap v582: 292 pgs: 292 active+clean; 8.3 MiB data, 912 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.1 KiB/s wr, 3 op/s 2026-03-09T20:28:32.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:31 vm09 ceph-mon[54524]: pgmap v582: 292 pgs: 292 active+clean; 8.3 MiB data, 912 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.1 KiB/s wr, 3 op/s 2026-03-09T20:28:34.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:33 vm09 ceph-mon[54524]: pgmap v583: 292 pgs: 292 active+clean; 8.3 MiB data, 911 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T20:28:34.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:33 vm05 ceph-mon[61345]: pgmap v583: 292 pgs: 292 active+clean; 8.3 MiB data, 911 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T20:28:34.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:33 vm05 ceph-mon[51870]: pgmap v583: 292 pgs: 292 active+clean; 8.3 MiB data, 911 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T20:28:35.598 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:35 vm09 ceph-mon[54524]: pgmap v584: 292 pgs: 292 active+clean; 8.3 MiB data, 911 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T20:28:35.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:35 vm05 ceph-mon[61345]: pgmap v584: 292 pgs: 292 active+clean; 8.3 MiB data, 911 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T20:28:35.670 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:35 vm05 ceph-mon[51870]: pgmap v584: 292 pgs: 292 active+clean; 8.3 MiB data, 911 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T20:28:36.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:28:35 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:28:36.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:36 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:28:36.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:36 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:28:36.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:36 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:28:37.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:37 vm09 ceph-mon[54524]: pgmap v585: 292 pgs: 292 active+clean; 8.3 MiB data, 911 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T20:28:37.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:37 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:28:37.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:37 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:28:37.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:37 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-71"}]: dispatch 2026-03-09T20:28:37.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:37 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-71"}]: dispatch 2026-03-09T20:28:37.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:37 vm05 ceph-mon[61345]: pgmap v585: 292 pgs: 292 active+clean; 8.3 MiB data, 911 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T20:28:37.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:37 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:28:37.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:37 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:28:37.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:37 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-71"}]: dispatch 2026-03-09T20:28:37.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:37 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-71"}]: dispatch 2026-03-09T20:28:37.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:37 vm05 ceph-mon[51870]: pgmap v585: 292 pgs: 292 active+clean; 8.3 MiB data, 911 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T20:28:37.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:37 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:28:37.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:37 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:28:37.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:37 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-71"}]: dispatch 2026-03-09T20:28:37.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:37 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-71"}]: dispatch 2026-03-09T20:28:38.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:38 vm05 ceph-mon[51870]: osdmap e406: 8 total, 8 up, 8 in 2026-03-09T20:28:38.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:28:38 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:28:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:28:38.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:38 vm05 ceph-mon[61345]: osdmap e406: 8 total, 8 up, 8 in 2026-03-09T20:28:39.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:38 vm09 ceph-mon[54524]: osdmap e406: 8 total, 8 up, 8 in 2026-03-09T20:28:40.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:39 vm09 ceph-mon[54524]: pgmap v587: 260 pgs: 260 active+clean; 8.3 MiB data, 911 MiB used, 159 GiB / 160 GiB avail; 921 B/s rd, 0 op/s 2026-03-09T20:28:40.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:39 vm09 ceph-mon[54524]: osdmap e407: 8 total, 8 up, 8 in 2026-03-09T20:28:40.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:39 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-73","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:28:40.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:39 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-73","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:28:40.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:39 vm05 ceph-mon[51870]: pgmap v587: 260 pgs: 260 active+clean; 8.3 MiB data, 911 MiB used, 159 GiB / 160 GiB avail; 921 B/s rd, 0 op/s 2026-03-09T20:28:40.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:39 vm05 ceph-mon[51870]: osdmap e407: 8 total, 8 up, 8 in 2026-03-09T20:28:40.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:39 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-73","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:28:40.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:39 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-73","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:28:40.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:39 vm05 ceph-mon[61345]: pgmap v587: 260 pgs: 260 active+clean; 8.3 MiB data, 911 MiB used, 159 GiB / 160 GiB avail; 921 B/s rd, 0 op/s 2026-03-09T20:28:40.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:39 vm05 ceph-mon[61345]: osdmap e407: 8 total, 8 up, 8 in 2026-03-09T20:28:40.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:39 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-73","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:28:40.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:39 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-73","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:28:41.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:40 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-73","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:28:41.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:40 vm09 ceph-mon[54524]: osdmap e408: 8 total, 8 up, 8 in 2026-03-09T20:28:41.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:40 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:28:41.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:40 vm09 ceph-mon[54524]: pgmap v590: 292 pgs: 29 unknown, 263 active+clean; 8.3 MiB data, 911 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:28:41.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:40 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-73","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:28:41.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:40 vm05 ceph-mon[61345]: osdmap e408: 8 total, 8 up, 8 in 2026-03-09T20:28:41.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:40 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:28:41.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:40 vm05 ceph-mon[61345]: pgmap v590: 292 pgs: 29 unknown, 263 active+clean; 8.3 MiB data, 911 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:28:41.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:40 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-73","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:28:41.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:40 vm05 ceph-mon[51870]: osdmap e408: 8 total, 8 up, 8 in 2026-03-09T20:28:41.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:40 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:28:41.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:40 vm05 ceph-mon[51870]: pgmap v590: 292 pgs: 29 unknown, 263 active+clean; 8.3 MiB data, 911 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:28:42.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:41 vm09 ceph-mon[54524]: osdmap e409: 8 total, 8 up, 8 in 2026-03-09T20:28:42.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:41 vm05 ceph-mon[61345]: osdmap e409: 8 total, 8 up, 8 in 2026-03-09T20:28:42.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:41 vm05 ceph-mon[51870]: osdmap e409: 8 total, 8 up, 8 in 2026-03-09T20:28:43.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:43 vm09 ceph-mon[54524]: osdmap e410: 8 total, 8 up, 8 in 2026-03-09T20:28:43.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:43 vm09 ceph-mon[54524]: pgmap v593: 292 pgs: 292 active+clean; 8.3 MiB data, 912 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T20:28:43.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:43 vm05 ceph-mon[61345]: osdmap e410: 8 total, 8 up, 8 in 2026-03-09T20:28:43.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:43 vm05 ceph-mon[61345]: pgmap v593: 292 pgs: 292 active+clean; 8.3 MiB data, 912 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T20:28:43.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:43 vm05 ceph-mon[51870]: osdmap e410: 8 total, 8 up, 8 in 2026-03-09T20:28:43.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:43 vm05 ceph-mon[51870]: pgmap v593: 292 pgs: 292 active+clean; 8.3 MiB data, 912 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T20:28:45.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:45 vm09 ceph-mon[54524]: pgmap v594: 292 pgs: 292 active+clean; 8.3 MiB data, 912 MiB used, 159 GiB / 160 GiB avail; 920 B/s rd, 920 B/s wr, 1 op/s 2026-03-09T20:28:45.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:45 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "294.c", "id": [6, 4]}]: dispatch 2026-03-09T20:28:45.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:45 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "294.1f", "id": [6, 4]}]: dispatch 2026-03-09T20:28:45.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:45 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "294.c", "id": [6, 4]}]: dispatch 2026-03-09T20:28:45.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:45 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "294.1f", "id": [6, 4]}]: dispatch 2026-03-09T20:28:45.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:45 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:28:45.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:45 vm05 ceph-mon[61345]: pgmap v594: 292 pgs: 292 active+clean; 8.3 MiB data, 912 MiB used, 159 GiB / 160 GiB avail; 920 B/s rd, 920 B/s wr, 1 op/s 2026-03-09T20:28:45.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:45 
vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "294.c", "id": [6, 4]}]: dispatch 2026-03-09T20:28:45.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:45 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "294.1f", "id": [6, 4]}]: dispatch 2026-03-09T20:28:45.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:45 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "294.c", "id": [6, 4]}]: dispatch 2026-03-09T20:28:45.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:45 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "294.1f", "id": [6, 4]}]: dispatch 2026-03-09T20:28:45.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:45 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:28:45.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:45 vm05 ceph-mon[51870]: pgmap v594: 292 pgs: 292 active+clean; 8.3 MiB data, 912 MiB used, 159 GiB / 160 GiB avail; 920 B/s rd, 920 B/s wr, 1 op/s 2026-03-09T20:28:45.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:45 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "294.c", "id": [6, 4]}]: dispatch 2026-03-09T20:28:45.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:45 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "294.1f", "id": [6, 4]}]: dispatch 2026-03-09T20:28:45.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:45 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "294.c", "id": [6, 4]}]: dispatch 2026-03-09T20:28:45.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:45 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "294.1f", "id": [6, 4]}]: dispatch 2026-03-09T20:28:45.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:45 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:28:46.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:28:45 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:28:46.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:46 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "294.c", "id": [6, 4]}]': finished 2026-03-09T20:28:46.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:46 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "294.1f", "id": [6, 4]}]': finished 2026-03-09T20:28:46.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:46 vm05 ceph-mon[61345]: osdmap e411: 8 total, 8 up, 8 in 2026-03-09T20:28:46.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:46 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' 
entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:28:46.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:46 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "294.c", "id": [6, 4]}]': finished 2026-03-09T20:28:46.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:46 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "294.1f", "id": [6, 4]}]': finished 2026-03-09T20:28:46.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:46 vm05 ceph-mon[51870]: osdmap e411: 8 total, 8 up, 8 in 2026-03-09T20:28:46.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:46 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:28:46.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:46 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "294.c", "id": [6, 4]}]': finished 2026-03-09T20:28:46.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:46 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "294.1f", "id": [6, 4]}]': finished 2026-03-09T20:28:46.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:46 vm09 ceph-mon[54524]: osdmap e411: 8 total, 8 up, 8 in 2026-03-09T20:28:46.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:46 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:28:47.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:47 vm05 ceph-mon[61345]: pgmap v596: 292 pgs: 1 peering, 291 active+clean; 8.3 MiB data, 934 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 4 op/s 2026-03-09T20:28:47.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:47 vm05 ceph-mon[61345]: osdmap e412: 8 total, 8 up, 8 in 2026-03-09T20:28:47.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:47 vm05 ceph-mon[51870]: pgmap v596: 292 pgs: 1 peering, 291 active+clean; 8.3 MiB data, 934 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 4 op/s 2026-03-09T20:28:47.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:47 vm05 ceph-mon[51870]: osdmap e412: 8 total, 8 up, 8 in 2026-03-09T20:28:47.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:47 vm09 ceph-mon[54524]: pgmap v596: 292 pgs: 1 peering, 291 active+clean; 8.3 MiB data, 934 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 4 op/s 2026-03-09T20:28:47.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:47 vm09 ceph-mon[54524]: osdmap e412: 8 total, 8 up, 8 in 2026-03-09T20:28:48.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:28:48 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:28:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:28:49.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:49 vm05 ceph-mon[61345]: pgmap v598: 292 pgs: 1 peering, 291 active+clean; 8.3 MiB data, 934 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 325 B/s wr, 3 op/s 2026-03-09T20:28:49.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:49 vm05 ceph-mon[51870]: pgmap v598: 292 pgs: 1 peering, 291 active+clean; 8.3 MiB data, 
934 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 325 B/s wr, 3 op/s 2026-03-09T20:28:49.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:49 vm09 ceph-mon[54524]: pgmap v598: 292 pgs: 1 peering, 291 active+clean; 8.3 MiB data, 934 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 325 B/s wr, 3 op/s 2026-03-09T20:28:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:51 vm05 ceph-mon[61345]: pgmap v599: 292 pgs: 1 peering, 291 active+clean; 8.3 MiB data, 934 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 255 B/s wr, 3 op/s 2026-03-09T20:28:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:51 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:28:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:51 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:28:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:51 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:28:51.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:51 vm05 ceph-mon[51870]: pgmap v599: 292 pgs: 1 peering, 291 active+clean; 8.3 MiB data, 934 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 255 B/s wr, 3 op/s 2026-03-09T20:28:51.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:51 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:28:51.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:51 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:28:51.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:51 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:28:52.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:51 vm09 ceph-mon[54524]: pgmap v599: 292 pgs: 1 peering, 291 active+clean; 8.3 MiB data, 934 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 255 B/s wr, 3 op/s 2026-03-09T20:28:52.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:51 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:28:52.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:51 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:28:52.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:51 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:28:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:52 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:28:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:52 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:28:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:52 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:28:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:52 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:28:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:52 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:28:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:52 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:28:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:52 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:28:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:52 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-73"}]: dispatch 2026-03-09T20:28:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:52 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-73"}]: dispatch 2026-03-09T20:28:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:52 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:28:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:52 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:28:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:52 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:28:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:52 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:28:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:52 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:28:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:52 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:28:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:52 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:28:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:52 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-73"}]: dispatch 2026-03-09T20:28:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:52 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-73"}]: dispatch 2026-03-09T20:28:53.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:52 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:28:53.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:52 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:28:53.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:52 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:28:53.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:52 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:28:53.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:52 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:28:53.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:52 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:28:53.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:52 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:28:53.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:52 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-73"}]: dispatch 2026-03-09T20:28:53.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:52 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-73"}]: dispatch 2026-03-09T20:28:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:53 vm05 ceph-mon[61345]: pgmap v600: 292 pgs: 292 active+clean; 8.3 MiB data, 934 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 255 B/s wr, 3 op/s 2026-03-09T20:28:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:53 vm05 ceph-mon[61345]: osdmap e413: 8 total, 8 up, 8 in 2026-03-09T20:28:53.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:53 vm05 ceph-mon[51870]: pgmap v600: 292 pgs: 292 active+clean; 8.3 MiB data, 934 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 255 B/s wr, 3 op/s 2026-03-09T20:28:53.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:53 vm05 ceph-mon[51870]: osdmap e413: 8 total, 8 up, 8 in 2026-03-09T20:28:54.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:53 vm09 ceph-mon[54524]: pgmap v600: 292 pgs: 292 active+clean; 8.3 MiB data, 934 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 255 B/s wr, 3 op/s 2026-03-09T20:28:54.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:53 vm09 ceph-mon[54524]: osdmap e413: 8 total, 8 up, 8 in 2026-03-09T20:28:55.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:54 vm09 ceph-mon[54524]: osdmap e414: 8 total, 8 up, 8 in 2026-03-09T20:28:55.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:54 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-75","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:28:55.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:54 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-75","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:28:55.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:54 vm09 ceph-mon[54524]: pgmap v603: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 934 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:28:55.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:54 vm05 ceph-mon[61345]: osdmap e414: 8 total, 8 up, 8 in 2026-03-09T20:28:55.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:54 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-75","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:28:55.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:54 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-75","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:28:55.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:54 vm05 ceph-mon[61345]: pgmap v603: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 934 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:28:55.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:54 vm05 ceph-mon[51870]: osdmap e414: 8 total, 8 up, 8 in 2026-03-09T20:28:55.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:54 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-75","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:28:55.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:54 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-75","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:28:55.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:54 vm05 ceph-mon[51870]: pgmap v603: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 934 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:28:55.963 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:28:55 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:28:56.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:55 vm09 ceph-mon[54524]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:28:56.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:55 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-75","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:28:56.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:55 vm09 ceph-mon[54524]: osdmap e415: 8 total, 8 up, 8 in 2026-03-09T20:28:56.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:55 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:28:56.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:55 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:28:56.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:55 vm05 ceph-mon[61345]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:28:56.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:55 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-75","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:28:56.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:55 vm05 ceph-mon[61345]: osdmap e415: 8 total, 8 up, 8 in 2026-03-09T20:28:56.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:55 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:28:56.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:55 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:28:56.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:55 vm05 ceph-mon[51870]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:28:56.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:55 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-75","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:28:56.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:55 vm05 ceph-mon[51870]: osdmap e415: 8 total, 8 up, 8 in 2026-03-09T20:28:56.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:55 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:28:56.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:55 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:28:57.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:56 vm09 ceph-mon[54524]: osdmap e416: 8 total, 8 up, 8 in 2026-03-09T20:28:57.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:56 vm09 ceph-mon[54524]: pgmap v606: 292 pgs: 292 active+clean; 8.3 MiB data, 918 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:28:57.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:56 vm05 ceph-mon[61345]: osdmap e416: 8 total, 8 up, 8 in 2026-03-09T20:28:57.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:56 vm05 ceph-mon[61345]: pgmap v606: 292 pgs: 292 active+clean; 8.3 MiB data, 918 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:28:57.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:56 vm05 ceph-mon[51870]: osdmap e416: 8 total, 8 up, 8 in 2026-03-09T20:28:57.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:56 vm05 ceph-mon[51870]: pgmap v606: 292 pgs: 292 active+clean; 8.3 MiB data, 918 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:28:58.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:57 vm09 ceph-mon[54524]: osdmap e417: 8 total, 8 up, 8 in 2026-03-09T20:28:58.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:57 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:28:58.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:57 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:28:58.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:57 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-75"}]: dispatch 2026-03-09T20:28:58.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:57 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-75"}]: dispatch 2026-03-09T20:28:58.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:57 vm05 ceph-mon[61345]: osdmap e417: 8 total, 8 up, 8 in 2026-03-09T20:28:58.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:57 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:28:58.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:57 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:28:58.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:57 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-75"}]: dispatch 2026-03-09T20:28:58.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:57 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-75"}]: dispatch 2026-03-09T20:28:58.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:57 vm05 ceph-mon[51870]: osdmap e417: 8 total, 8 up, 8 in 2026-03-09T20:28:58.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:57 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:28:58.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:57 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:28:58.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:57 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-75"}]: dispatch 2026-03-09T20:28:58.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:57 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-75"}]: dispatch 2026-03-09T20:28:58.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:28:58 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:28:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:28:59.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:59 vm09 ceph-mon[54524]: osdmap e418: 8 total, 8 up, 8 in 2026-03-09T20:28:59.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:28:59 vm09 ceph-mon[54524]: pgmap v609: 260 pgs: 260 active+clean; 8.3 MiB data, 918 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:28:59.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:59 vm05 ceph-mon[61345]: osdmap e418: 8 total, 8 up, 8 in 2026-03-09T20:28:59.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:28:59 vm05 ceph-mon[61345]: pgmap v609: 260 pgs: 260 active+clean; 8.3 MiB data, 918 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:28:59.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:59 vm05 ceph-mon[51870]: osdmap e418: 8 total, 8 up, 8 in 2026-03-09T20:28:59.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:28:59 vm05 ceph-mon[51870]: pgmap v609: 260 pgs: 260 active+clean; 8.3 MiB data, 918 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:29:00.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:00 vm05 ceph-mon[61345]: osdmap e419: 8 total, 8 up, 8 in 2026-03-09T20:29:00.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:00 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-77","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:29:00.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:00 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-77","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:29:00.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:00 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:29:00.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:00 vm05 ceph-mon[51870]: osdmap e419: 8 total, 8 up, 8 in 2026-03-09T20:29:00.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:00 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-77","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:29:00.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:00 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-77","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:29:00.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:00 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:29:00.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:00 vm09 ceph-mon[54524]: osdmap e419: 8 total, 8 up, 8 in 2026-03-09T20:29:00.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:00 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-77","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:29:00.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:00 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-77","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:29:00.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:00 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:29:01.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:01 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-77","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:29:01.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:01 vm05 ceph-mon[61345]: osdmap e420: 8 total, 8 up, 8 in 2026-03-09T20:29:01.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:01 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:29:01.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:01 vm05 ceph-mon[61345]: pgmap v612: 292 pgs: 27 unknown, 265 active+clean; 8.3 MiB data, 918 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:29:01.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:01 vm05 ceph-mon[61345]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:29:01.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:01 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-77","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:29:01.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:01 vm05 ceph-mon[51870]: osdmap e420: 8 total, 8 up, 8 in 2026-03-09T20:29:01.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:01 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:29:01.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:01 vm05 ceph-mon[51870]: pgmap v612: 292 pgs: 27 unknown, 265 active+clean; 8.3 MiB data, 918 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:29:01.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:01 vm05 ceph-mon[51870]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:29:01.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:01 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-77","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:29:01.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:01 vm09 ceph-mon[54524]: osdmap e420: 8 total, 8 up, 8 in 2026-03-09T20:29:01.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:01 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:29:01.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:01 vm09 ceph-mon[54524]: pgmap v612: 292 pgs: 27 unknown, 265 active+clean; 8.3 MiB data, 918 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:29:01.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:01 vm09 ceph-mon[54524]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:29:02.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:02 vm05 ceph-mon[61345]: osdmap e421: 8 total, 8 up, 8 in 2026-03-09T20:29:02.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:02 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:02.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:02 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:02.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:02 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-77"}]: dispatch 2026-03-09T20:29:02.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:02 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-77"}]: dispatch 2026-03-09T20:29:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:02 vm05 ceph-mon[51870]: osdmap e421: 8 total, 8 up, 8 in 2026-03-09T20:29:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:02 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:02 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:02 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-77"}]: dispatch 2026-03-09T20:29:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:02 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-77"}]: dispatch 2026-03-09T20:29:02.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:02 vm09 ceph-mon[54524]: osdmap e421: 8 total, 8 up, 8 in 2026-03-09T20:29:02.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:02 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:02.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:02 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:02.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:02 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-77"}]: dispatch 2026-03-09T20:29:02.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:02 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-77"}]: dispatch 2026-03-09T20:29:03.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:03 vm05 ceph-mon[61345]: osdmap e422: 8 total, 8 up, 8 in 2026-03-09T20:29:03.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:03 vm05 ceph-mon[61345]: pgmap v615: 260 pgs: 260 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T20:29:03.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:03 vm05 ceph-mon[51870]: osdmap e422: 8 total, 8 up, 8 in 2026-03-09T20:29:03.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:03 vm05 ceph-mon[51870]: pgmap v615: 260 pgs: 260 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T20:29:03.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:03 vm09 ceph-mon[54524]: osdmap e422: 8 total, 8 up, 8 in 2026-03-09T20:29:03.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:03 vm09 ceph-mon[54524]: pgmap v615: 260 pgs: 260 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T20:29:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:04 vm05 ceph-mon[61345]: osdmap e423: 8 total, 8 up, 8 in 2026-03-09T20:29:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:04 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-79","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:29:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:04 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-79","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:29:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:04 vm05 ceph-mon[51870]: osdmap e423: 8 total, 8 up, 8 in 2026-03-09T20:29:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:04 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-79","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:29:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:04 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-79","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:29:04.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:04 vm09 ceph-mon[54524]: osdmap e423: 8 total, 8 up, 8 in 2026-03-09T20:29:04.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:04 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-79","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:29:04.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:04 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-79","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:29:05.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:05 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-79","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:29:05.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:05 vm05 ceph-mon[61345]: osdmap e424: 8 total, 8 up, 8 in 2026-03-09T20:29:05.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:05 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:29:05.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:05 vm05 ceph-mon[61345]: pgmap v618: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 511 B/s wr, 1 op/s 2026-03-09T20:29:05.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:05 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-79","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:29:05.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:05 vm05 ceph-mon[51870]: osdmap e424: 8 total, 8 up, 8 in 2026-03-09T20:29:05.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:05 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:29:05.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:05 vm05 ceph-mon[51870]: pgmap v618: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 511 B/s wr, 1 op/s 2026-03-09T20:29:05.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:05 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-79","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:29:05.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:05 vm09 ceph-mon[54524]: osdmap e424: 8 total, 8 up, 8 in 2026-03-09T20:29:05.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:05 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:29:05.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:05 vm09 ceph-mon[54524]: pgmap v618: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 903 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 511 B/s wr, 1 op/s 2026-03-09T20:29:06.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:29:05 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:29:06.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:06 vm05 ceph-mon[61345]: osdmap e425: 8 total, 8 up, 8 in 2026-03-09T20:29:06.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:06 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:29:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:06 vm05 ceph-mon[51870]: osdmap e425: 8 total, 8 up, 8 in 2026-03-09T20:29:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:06 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:29:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:06 vm09 ceph-mon[54524]: osdmap e425: 8 total, 8 up, 8 in 2026-03-09T20:29:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:06 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:29:07.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:07 vm09 ceph-mon[54524]: osdmap e426: 8 total, 8 up, 8 in 2026-03-09T20:29:07.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:07 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "297.f"}]: dispatch 2026-03-09T20:29:07.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:07 vm09 ceph-mon[54524]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "297.f"}]: dispatch 2026-03-09T20:29:07.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:07 vm09 ceph-mon[54524]: pgmap v621: 292 pgs: 292 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T20:29:07.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:07 vm05 ceph-mon[61345]: osdmap e426: 8 total, 8 up, 8 in 2026-03-09T20:29:07.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:07 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "297.f"}]: dispatch 2026-03-09T20:29:07.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:07 vm05 ceph-mon[61345]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "297.f"}]: dispatch 2026-03-09T20:29:07.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:07 vm05 ceph-mon[61345]: pgmap v621: 292 pgs: 292 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T20:29:07.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:07 vm05 ceph-mon[51870]: osdmap e426: 8 total, 8 up, 8 in 2026-03-09T20:29:07.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:07 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "297.f"}]: dispatch 2026-03-09T20:29:07.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:07 vm05 ceph-mon[51870]: from='mon.? v1:192.168.123.109:0/2580205544' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "297.f"}]: dispatch 2026-03-09T20:29:07.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:07 vm05 ceph-mon[51870]: pgmap v621: 292 pgs: 292 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T20:29:08.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:08 vm09 ceph-mon[54524]: 297.f deep-scrub starts 2026-03-09T20:29:08.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:08 vm09 ceph-mon[54524]: 297.f deep-scrub ok 2026-03-09T20:29:08.561 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:08 vm05 ceph-mon[61345]: 297.f deep-scrub starts 2026-03-09T20:29:08.561 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:08 vm05 ceph-mon[61345]: 297.f deep-scrub ok 2026-03-09T20:29:08.561 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:08 vm05 ceph-mon[51870]: 297.f deep-scrub starts 2026-03-09T20:29:08.561 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:08 vm05 ceph-mon[51870]: 297.f deep-scrub ok 2026-03-09T20:29:08.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:29:08 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:29:08] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:29:09.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:09 vm09 ceph-mon[54524]: pgmap v622: 292 pgs: 292 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 986 B/s wr, 2 op/s 2026-03-09T20:29:09.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:09 vm05 ceph-mon[61345]: pgmap v622: 292 pgs: 292 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 986 B/s wr, 2 op/s 2026-03-09T20:29:09.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:09 vm05 ceph-mon[51870]: pgmap v622: 292 pgs: 292 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 986 B/s wr, 2 op/s 2026-03-09T20:29:11.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:11 vm05 ceph-mon[61345]: pgmap v623: 292 pgs: 292 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 827 B/s wr, 2 op/s 2026-03-09T20:29:11.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:11 vm05 ceph-mon[51870]: pgmap v623: 292 pgs: 292 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 827 B/s wr, 2 op/s 2026-03-09T20:29:11.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:11 vm09 ceph-mon[54524]: pgmap v623: 292 pgs: 292 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 827 B/s wr, 2 op/s 2026-03-09T20:29:13.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:13 vm05 ceph-mon[61345]: pgmap 
v624: 292 pgs: 292 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T20:29:13.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:13 vm05 ceph-mon[51870]: pgmap v624: 292 pgs: 292 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T20:29:13.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:13 vm09 ceph-mon[54524]: pgmap v624: 292 pgs: 292 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T20:29:15.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:15 vm09 ceph-mon[54524]: pgmap v625: 292 pgs: 292 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 670 B/s wr, 2 op/s 2026-03-09T20:29:15.639 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:15 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:29:15.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:15 vm05 ceph-mon[61345]: pgmap v625: 292 pgs: 292 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 670 B/s wr, 2 op/s 2026-03-09T20:29:15.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:15 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:29:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:15 vm05 ceph-mon[51870]: pgmap v625: 292 pgs: 292 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 670 B/s wr, 2 op/s 2026-03-09T20:29:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:15 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:29:16.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:29:15 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:29:16.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:16 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:29:16.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:16 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:29:16.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:16 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:29:17.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:17 vm05 ceph-mon[61345]: pgmap v626: 292 pgs: 292 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail; 1009 B/s rd, 100 B/s wr, 1 op/s 2026-03-09T20:29:17.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:17 vm05 ceph-mon[51870]: pgmap v626: 292 pgs: 292 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail; 1009 B/s rd, 100 B/s wr, 1 op/s 2026-03-09T20:29:17.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:17 vm09 ceph-mon[54524]: pgmap v626: 292 pgs: 292 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail; 1009 B/s rd, 100 B/s wr, 1 op/s 
2026-03-09T20:29:18.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:29:18 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:29:18] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:29:19.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:19 vm05 ceph-mon[61345]: pgmap v627: 292 pgs: 292 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 85 B/s wr, 1 op/s 2026-03-09T20:29:19.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:19 vm05 ceph-mon[51870]: pgmap v627: 292 pgs: 292 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 85 B/s wr, 1 op/s 2026-03-09T20:29:19.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:19 vm09 ceph-mon[54524]: pgmap v627: 292 pgs: 292 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 85 B/s wr, 1 op/s 2026-03-09T20:29:21.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:21 vm05 ceph-mon[61345]: pgmap v628: 292 pgs: 292 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 85 B/s wr, 1 op/s 2026-03-09T20:29:21.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:21 vm05 ceph-mon[51870]: pgmap v628: 292 pgs: 292 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 85 B/s wr, 1 op/s 2026-03-09T20:29:21.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:21 vm09 ceph-mon[54524]: pgmap v628: 292 pgs: 292 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 85 B/s wr, 1 op/s 2026-03-09T20:29:23.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:23 vm05 ceph-mon[61345]: pgmap v629: 292 pgs: 292 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 85 B/s wr, 1 op/s 2026-03-09T20:29:23.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:23 vm05 ceph-mon[51870]: pgmap v629: 292 pgs: 292 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 85 B/s wr, 1 op/s 2026-03-09T20:29:23.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:23 vm09 ceph-mon[54524]: pgmap v629: 292 pgs: 292 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 85 B/s wr, 1 op/s 2026-03-09T20:29:25.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:25 vm09 ceph-mon[54524]: pgmap v630: 292 pgs: 292 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:29:25.773 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:29:25 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:29:25.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:25 vm05 ceph-mon[61345]: pgmap v630: 292 pgs: 292 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:29:25.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:25 vm05 ceph-mon[51870]: pgmap v630: 292 pgs: 292 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:29:26.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:26 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:29:26.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:26 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:26.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:26 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:26.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:26 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-79"}]: dispatch 2026-03-09T20:29:26.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:26 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-79"}]: dispatch 2026-03-09T20:29:26.523 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 20:29:26 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-5[64082]: 2026-03-09T20:29:26.187+0000 7fda96ace640 -1 snap_mapper.add_oid found existing snaps mapped on 297:f5edac47:test-rados-api-vm05-94573-80::foo:2, removing 2026-03-09T20:29:26.523 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 20:29:26 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-6[69280]: 2026-03-09T20:29:26.186+0000 7f0c3016d640 -1 snap_mapper.add_oid found existing snaps mapped on 297:f5edac47:test-rados-api-vm05-94573-80::foo:2, removing 2026-03-09T20:29:26.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:26 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:29:26.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:26 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:26.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:26 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:26.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:26 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-79"}]: dispatch 2026-03-09T20:29:26.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:26 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-79"}]: dispatch 2026-03-09T20:29:26.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:26 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:29:26.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:26 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:26.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:26 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:26.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:26 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-79"}]: dispatch 2026-03-09T20:29:26.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:26 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-79"}]: dispatch 2026-03-09T20:29:26.660 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 20:29:26 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-1[70325]: 2026-03-09T20:29:26.187+0000 7f0905ee5640 -1 snap_mapper.add_oid found existing snaps mapped on 297:f5edac47:test-rados-api-vm05-94573-80::foo:2, removing 2026-03-09T20:29:27.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:27 vm09 ceph-mon[54524]: pgmap v631: 292 pgs: 292 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:29:27.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:27 vm09 ceph-mon[54524]: osdmap e427: 8 total, 8 up, 8 in 2026-03-09T20:29:27.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:27 vm05 ceph-mon[61345]: pgmap v631: 292 pgs: 292 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:29:27.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:27 vm05 ceph-mon[61345]: osdmap e427: 8 total, 8 up, 8 in 2026-03-09T20:29:27.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:27 vm05 ceph-mon[51870]: pgmap v631: 292 pgs: 292 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:29:27.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:27 vm05 ceph-mon[51870]: osdmap e427: 8 total, 8 up, 8 in 2026-03-09T20:29:28.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:28 vm09 ceph-mon[54524]: osdmap e428: 8 total, 8 up, 8 in 2026-03-09T20:29:28.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:28 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-81","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:29:28.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:28 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-81","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:29:28.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:28 vm05 ceph-mon[61345]: osdmap e428: 8 total, 8 up, 8 in 2026-03-09T20:29:28.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:28 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-81","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:29:28.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:28 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-81","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:29:28.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:28 vm05 ceph-mon[51870]: osdmap e428: 8 total, 8 up, 8 in 2026-03-09T20:29:28.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:28 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-81","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:29:28.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:28 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-81","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:29:28.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:29:28 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:29:28] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:29:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:29 vm09 ceph-mon[54524]: pgmap v634: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T20:29:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:29 vm09 ceph-mon[54524]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:29:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:29 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-81","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:29:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:29 vm09 ceph-mon[54524]: osdmap e429: 8 total, 8 up, 8 in 2026-03-09T20:29:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:29 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:29:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:29 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:29:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:29 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:29:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:29 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T20:29:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:29 vm09 ceph-mon[54524]: osdmap e430: 8 total, 8 up, 8 in 2026-03-09T20:29:29.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:29 vm05 ceph-mon[61345]: pgmap v634: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T20:29:29.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:29 vm05 ceph-mon[61345]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:29:29.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:29 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-81","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:29:29.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:29 vm05 ceph-mon[61345]: osdmap e429: 8 total, 8 up, 8 in 2026-03-09T20:29:29.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:29 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:29:29.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:29 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:29:29.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:29 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:29:29.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:29 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T20:29:29.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:29 vm05 ceph-mon[61345]: osdmap e430: 8 total, 8 up, 8 in 2026-03-09T20:29:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:29 vm05 ceph-mon[51870]: pgmap v634: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T20:29:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:29 vm05 ceph-mon[51870]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:29:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:29 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-81","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:29:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:29 vm05 ceph-mon[51870]: osdmap e429: 8 total, 8 up, 8 in 2026-03-09T20:29:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:29 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:29:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:29 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:29:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:29 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:29:29.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:29 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T20:29:29.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:29 vm05 ceph-mon[51870]: osdmap e430: 8 total, 8 up, 8 in 2026-03-09T20:29:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:30 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_tier","val": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:30 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_tier","val": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:30 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:29:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:30 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_tier","val": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:30 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_tier","val": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:30 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:29:31.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:30 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_tier","val": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:31.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:30 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_tier","val": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:31.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:30 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:29:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:31 vm05 ceph-mon[61345]: pgmap v637: 292 pgs: 27 unknown, 265 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:29:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:31 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_tier","val": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:29:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:31 vm05 ceph-mon[61345]: osdmap e431: 8 total, 8 up, 8 in 2026-03-09T20:29:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:31 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T20:29:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:31 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T20:29:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:31 vm05 ceph-mon[51870]: pgmap v637: 292 pgs: 27 unknown, 265 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:29:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:31 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_tier","val": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:29:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:31 vm05 ceph-mon[51870]: osdmap e431: 8 total, 8 up, 8 in 2026-03-09T20:29:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:31 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T20:29:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:31 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T20:29:32.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:31 vm09 ceph-mon[54524]: pgmap v637: 292 pgs: 27 unknown, 265 active+clean; 8.3 MiB data, 921 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:29:32.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:31 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_tier","val": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:29:32.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:31 vm09 ceph-mon[54524]: osdmap e431: 8 total, 8 up, 8 in 2026-03-09T20:29:32.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:31 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T20:29:32.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:31 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T20:29:32.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:32 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T20:29:32.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:32 vm05 ceph-mon[61345]: osdmap e432: 8 total, 8 up, 8 in 2026-03-09T20:29:32.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:32 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T20:29:32.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:32 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T20:29:32.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:32 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T20:29:32.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:32 vm05 ceph-mon[61345]: osdmap e433: 8 total, 8 up, 8 in 2026-03-09T20:29:32.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:32 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T20:29:32.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:32 vm05 ceph-mon[51870]: osdmap e432: 8 total, 8 up, 8 in 2026-03-09T20:29:32.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:32 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T20:29:32.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:32 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T20:29:32.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:32 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T20:29:32.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:32 vm05 ceph-mon[51870]: osdmap e433: 8 total, 8 up, 8 in 2026-03-09T20:29:33.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:32 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T20:29:33.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:32 vm09 ceph-mon[54524]: osdmap e432: 8 total, 8 up, 8 in 2026-03-09T20:29:33.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:32 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T20:29:33.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:32 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T20:29:33.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:32 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T20:29:33.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:32 vm09 ceph-mon[54524]: osdmap e433: 8 total, 8 up, 8 in 2026-03-09T20:29:33.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:33 vm05 ceph-mon[61345]: pgmap v640: 292 pgs: 292 active+clean; 8.3 MiB data, 922 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T20:29:33.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:33 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_cdc_chunk_size","val": "512"}]: dispatch 2026-03-09T20:29:33.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:33 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_cdc_chunk_size","val": "512"}]: dispatch 2026-03-09T20:29:33.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:33 vm05 ceph-mon[51870]: pgmap v640: 292 pgs: 292 active+clean; 8.3 MiB data, 922 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T20:29:33.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:33 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_cdc_chunk_size","val": "512"}]: dispatch 2026-03-09T20:29:33.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:33 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_cdc_chunk_size","val": "512"}]: dispatch 2026-03-09T20:29:34.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:33 vm09 ceph-mon[54524]: pgmap v640: 292 pgs: 292 active+clean; 8.3 MiB data, 922 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T20:29:34.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:33 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_cdc_chunk_size","val": "512"}]: dispatch 2026-03-09T20:29:34.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:33 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_cdc_chunk_size","val": "512"}]: dispatch 2026-03-09T20:29:34.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:34 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_cdc_chunk_size","val": "512"}]': finished 2026-03-09T20:29:34.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:34 vm05 ceph-mon[61345]: osdmap e434: 8 total, 8 up, 8 in 2026-03-09T20:29:34.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:34 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-09T20:29:34.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:34 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-09T20:29:34.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:34 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_cdc_chunk_size","val": "512"}]': finished 2026-03-09T20:29:34.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:34 vm05 ceph-mon[51870]: osdmap e434: 8 total, 8 up, 8 in 2026-03-09T20:29:34.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:34 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-09T20:29:34.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:34 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-09T20:29:35.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:34 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_cdc_chunk_size","val": "512"}]': finished 2026-03-09T20:29:35.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:34 vm09 ceph-mon[54524]: osdmap e434: 8 total, 8 up, 8 in 2026-03-09T20:29:35.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:34 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-09T20:29:35.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:34 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-09T20:29:35.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:35 vm05 ceph-mon[61345]: pgmap v643: 292 pgs: 292 active+clean; 8.3 MiB data, 922 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T20:29:35.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:35 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_cdc_chunk_size","val": "16384"}]': finished 2026-03-09T20:29:35.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:35 vm05 ceph-mon[61345]: osdmap e435: 8 total, 8 up, 8 in 2026-03-09T20:29:35.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:35 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T20:29:35.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:35 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T20:29:35.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:35 vm05 ceph-mon[51870]: pgmap v643: 292 pgs: 292 active+clean; 8.3 MiB data, 922 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T20:29:35.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:35 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_cdc_chunk_size","val": "16384"}]': finished 2026-03-09T20:29:35.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:35 vm05 ceph-mon[51870]: osdmap e435: 8 total, 8 up, 8 in 2026-03-09T20:29:35.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:35 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T20:29:35.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:35 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T20:29:36.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:35 vm09 ceph-mon[54524]: pgmap v643: 292 pgs: 292 active+clean; 8.3 MiB data, 922 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T20:29:36.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:35 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_cdc_chunk_size","val": "16384"}]': finished 2026-03-09T20:29:36.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:35 vm09 ceph-mon[54524]: osdmap e435: 8 total, 8 up, 8 in 2026-03-09T20:29:36.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:35 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T20:29:36.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:35 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T20:29:36.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:29:35 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:29:36.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:36 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T20:29:36.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:36 vm05 ceph-mon[61345]: osdmap e436: 8 total, 8 up, 8 in 2026-03-09T20:29:36.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:36 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:29:36.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:36 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:36.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:36 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:36.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:36 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-81"}]: dispatch 2026-03-09T20:29:36.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:36 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-81"}]: dispatch 2026-03-09T20:29:36.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:36 vm05 ceph-mon[61345]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:29:36.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:36 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T20:29:36.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:36 vm05 ceph-mon[51870]: osdmap e436: 8 total, 8 up, 8 in 2026-03-09T20:29:36.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:36 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:29:36.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:36 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:36.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:36 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:36.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:36 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-81"}]: dispatch 2026-03-09T20:29:36.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:36 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-81"}]: dispatch 2026-03-09T20:29:36.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:36 vm05 ceph-mon[51870]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:29:37.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:36 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T20:29:37.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:36 vm09 ceph-mon[54524]: osdmap e436: 8 total, 8 up, 8 in 2026-03-09T20:29:37.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:36 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:29:37.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:36 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:37.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:36 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:37.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:36 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-81"}]: dispatch 2026-03-09T20:29:37.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:36 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-81"}]: dispatch 2026-03-09T20:29:37.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:36 vm09 ceph-mon[54524]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:29:37.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:37 vm05 ceph-mon[61345]: pgmap v646: 292 pgs: 292 active+clean; 8.3 MiB data, 922 MiB used, 159 GiB / 160 GiB avail; 7.5 KiB/s rd, 11 KiB/s wr, 29 op/s 2026-03-09T20:29:37.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:37 vm05 ceph-mon[61345]: osdmap e437: 8 total, 8 up, 8 in 2026-03-09T20:29:37.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:37 vm05 ceph-mon[51870]: pgmap v646: 292 pgs: 292 active+clean; 8.3 MiB data, 922 MiB used, 159 GiB / 160 GiB avail; 7.5 KiB/s rd, 11 KiB/s wr, 29 op/s 2026-03-09T20:29:37.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:37 vm05 ceph-mon[51870]: osdmap e437: 8 total, 8 up, 8 in 2026-03-09T20:29:38.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:37 vm09 ceph-mon[54524]: pgmap v646: 292 pgs: 292 active+clean; 8.3 MiB data, 922 MiB used, 159 GiB / 160 GiB avail; 7.5 KiB/s rd, 11 KiB/s wr, 29 op/s 2026-03-09T20:29:38.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:37 vm09 ceph-mon[54524]: osdmap e437: 8 total, 8 up, 8 in 2026-03-09T20:29:38.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:38 vm05 ceph-mon[61345]: osdmap e438: 8 total, 8 up, 8 in 2026-03-09T20:29:38.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:38 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:29:38.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:38 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:29:38.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:38 vm05 ceph-mon[51870]: osdmap e438: 8 total, 8 up, 8 in 2026-03-09T20:29:38.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:38 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:29:38.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:38 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:29:38.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:29:38 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:29:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:29:39.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:38 vm09 ceph-mon[54524]: osdmap e438: 8 total, 8 up, 8 in 2026-03-09T20:29:39.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:38 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:29:39.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:38 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:29:40.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:39 vm09 ceph-mon[54524]: pgmap v649: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 922 MiB used, 159 GiB / 160 GiB avail; 7.5 KiB/s rd, 9.0 KiB/s wr, 28 op/s 2026-03-09T20:29:40.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:39 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-83","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:29:40.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:39 vm09 ceph-mon[54524]: osdmap e439: 8 total, 8 up, 8 in 2026-03-09T20:29:40.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:39 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:29:40.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:39 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:29:40.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:39 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:29:40.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:39 vm05 ceph-mon[61345]: pgmap v649: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 922 MiB used, 159 GiB / 160 GiB avail; 7.5 KiB/s rd, 9.0 KiB/s wr, 28 op/s 2026-03-09T20:29:40.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:39 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-83","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:29:40.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:39 vm05 ceph-mon[61345]: osdmap e439: 8 total, 8 up, 8 in 2026-03-09T20:29:40.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:39 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:29:40.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:39 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:29:40.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:39 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:29:40.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:39 vm05 ceph-mon[51870]: pgmap v649: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 922 MiB used, 159 GiB / 160 GiB avail; 7.5 KiB/s rd, 9.0 KiB/s wr, 28 op/s 2026-03-09T20:29:40.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:39 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-83","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:29:40.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:39 vm05 ceph-mon[51870]: osdmap e439: 8 total, 8 up, 8 in 2026-03-09T20:29:40.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:39 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:29:40.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:39 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:29:40.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:39 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:29:41.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:40 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-83","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T20:29:41.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:40 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-83","var": "dedup_tier","val": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:41.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:40 vm09 ceph-mon[54524]: osdmap e440: 8 total, 8 up, 8 in 2026-03-09T20:29:41.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:40 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-83","var": "dedup_tier","val": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:41.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:40 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-83","var": "dedup_tier","val": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:29:41.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:40 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T20:29:41.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:40 vm09 ceph-mon[54524]: osdmap e441: 8 total, 8 up, 8 in 2026-03-09T20:29:41.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:40 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T20:29:41.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:40 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-83","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T20:29:41.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:40 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-83","var": "dedup_tier","val": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:41.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:40 vm05 ceph-mon[61345]: osdmap e440: 8 total, 8 up, 8 in 2026-03-09T20:29:41.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:40 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-83","var": "dedup_tier","val": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:41.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:40 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-83","var": "dedup_tier","val": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:29:41.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:40 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T20:29:41.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:40 vm05 ceph-mon[61345]: osdmap e441: 8 total, 8 up, 8 in 2026-03-09T20:29:41.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:40 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T20:29:41.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:40 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-83","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T20:29:41.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:40 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-83","var": "dedup_tier","val": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:41.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:40 vm05 ceph-mon[51870]: osdmap e440: 8 total, 8 up, 8 in 2026-03-09T20:29:41.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:40 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-83","var": "dedup_tier","val": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:41.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:40 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-83","var": "dedup_tier","val": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:29:41.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:40 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T20:29:41.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:40 vm05 ceph-mon[51870]: osdmap e441: 8 total, 8 up, 8 in 2026-03-09T20:29:41.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:40 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T20:29:42.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:41 vm09 ceph-mon[54524]: pgmap v652: 292 pgs: 22 unknown, 270 active+clean; 8.3 MiB data, 923 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 4 op/s 2026-03-09T20:29:42.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:41 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T20:29:42.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:41 vm09 ceph-mon[54524]: osdmap e442: 8 total, 8 up, 8 in 2026-03-09T20:29:42.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:41 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T20:29:42.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:41 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T20:29:42.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:41 vm05 ceph-mon[61345]: pgmap v652: 292 pgs: 22 unknown, 270 active+clean; 8.3 MiB data, 923 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 4 op/s 2026-03-09T20:29:42.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:41 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T20:29:42.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:41 vm05 ceph-mon[61345]: osdmap e442: 8 total, 8 up, 8 in 2026-03-09T20:29:42.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:41 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T20:29:42.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:41 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T20:29:42.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:41 vm05 ceph-mon[51870]: pgmap v652: 292 pgs: 22 unknown, 270 active+clean; 8.3 MiB data, 923 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 4 op/s 2026-03-09T20:29:42.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:41 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T20:29:42.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:41 vm05 ceph-mon[51870]: osdmap e442: 8 total, 8 up, 8 in 2026-03-09T20:29:42.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:41 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T20:29:42.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:41 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T20:29:44.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:43 vm09 ceph-mon[54524]: pgmap v655: 292 pgs: 292 active+clean; 8.3 MiB data, 923 MiB used, 159 GiB / 160 GiB avail; 3.7 KiB/s rd, 3.2 KiB/s wr, 11 op/s 2026-03-09T20:29:44.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:43 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-83","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T20:29:44.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:43 vm09 ceph-mon[54524]: osdmap e443: 8 total, 8 up, 8 in 2026-03-09T20:29:44.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:43 vm05 ceph-mon[61345]: pgmap v655: 292 pgs: 292 active+clean; 8.3 MiB data, 923 MiB used, 159 GiB / 160 GiB avail; 3.7 KiB/s rd, 3.2 KiB/s wr, 11 op/s 2026-03-09T20:29:44.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:43 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-83","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T20:29:44.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:43 vm05 ceph-mon[61345]: osdmap e443: 8 total, 8 up, 8 in 2026-03-09T20:29:44.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:43 vm05 ceph-mon[51870]: pgmap v655: 292 pgs: 292 active+clean; 8.3 MiB data, 923 MiB used, 159 GiB / 160 GiB avail; 3.7 KiB/s rd, 3.2 KiB/s wr, 11 op/s 2026-03-09T20:29:44.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:43 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-83","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T20:29:44.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:43 vm05 ceph-mon[51870]: osdmap e443: 8 total, 8 
up, 8 in 2026-03-09T20:29:45.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:44 vm09 ceph-mon[54524]: osdmap e444: 8 total, 8 up, 8 in 2026-03-09T20:29:45.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:44 vm05 ceph-mon[61345]: osdmap e444: 8 total, 8 up, 8 in 2026-03-09T20:29:45.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:44 vm05 ceph-mon[51870]: osdmap e444: 8 total, 8 up, 8 in 2026-03-09T20:29:46.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:45 vm09 ceph-mon[54524]: pgmap v658: 292 pgs: 292 active+clean; 8.3 MiB data, 923 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 2.2 KiB/s wr, 6 op/s 2026-03-09T20:29:46.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:45 vm09 ceph-mon[54524]: osdmap e445: 8 total, 8 up, 8 in 2026-03-09T20:29:46.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:45 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:46.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:45 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:46.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:45 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-83"}]: dispatch 2026-03-09T20:29:46.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:45 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-83"}]: dispatch 2026-03-09T20:29:46.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:45 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:29:46.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:29:45 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:29:46.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:45 vm05 ceph-mon[61345]: pgmap v658: 292 pgs: 292 active+clean; 8.3 MiB data, 923 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 2.2 KiB/s wr, 6 op/s 2026-03-09T20:29:46.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:45 vm05 ceph-mon[61345]: osdmap e445: 8 total, 8 up, 8 in 2026-03-09T20:29:46.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:45 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:46.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:45 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:46.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:45 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-83"}]: dispatch 2026-03-09T20:29:46.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:45 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-83"}]: dispatch 2026-03-09T20:29:46.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:45 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:29:46.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:45 vm05 ceph-mon[51870]: pgmap v658: 292 pgs: 292 active+clean; 8.3 MiB data, 923 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 2.2 KiB/s wr, 6 op/s 2026-03-09T20:29:46.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:45 vm05 ceph-mon[51870]: osdmap e445: 8 total, 8 up, 8 in 2026-03-09T20:29:46.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:45 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:46.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:45 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:46.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:45 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-83"}]: dispatch 2026-03-09T20:29:46.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:45 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-83"}]: dispatch 2026-03-09T20:29:46.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:45 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:29:47.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:46 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:29:47.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:46 vm09 ceph-mon[54524]: osdmap e446: 8 total, 8 up, 8 in 2026-03-09T20:29:47.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:46 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:29:47.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:46 vm05 ceph-mon[61345]: osdmap e446: 8 total, 8 up, 8 in 2026-03-09T20:29:47.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:46 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:29:47.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:46 vm05 ceph-mon[51870]: osdmap e446: 8 total, 8 up, 8 in 2026-03-09T20:29:48.160 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:47 vm05 ceph-mon[61345]: pgmap v661: 260 pgs: 260 active+clean; 8.3 MiB data, 924 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.2 KiB/s wr, 10 op/s 2026-03-09T20:29:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:47 vm05 ceph-mon[61345]: osdmap e447: 8 total, 8 up, 8 in 2026-03-09T20:29:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:47 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-85","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:29:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:47 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-85","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:29:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:47 vm05 ceph-mon[51870]: pgmap v661: 260 pgs: 260 active+clean; 8.3 MiB data, 924 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.2 KiB/s wr, 10 op/s 2026-03-09T20:29:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:47 vm05 ceph-mon[51870]: osdmap e447: 8 total, 8 up, 8 in 2026-03-09T20:29:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:47 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-85","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:29:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:47 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-85","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:29:48.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:47 vm09 ceph-mon[54524]: pgmap v661: 260 pgs: 260 active+clean; 8.3 MiB data, 924 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.2 KiB/s wr, 10 op/s 2026-03-09T20:29:48.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:47 vm09 ceph-mon[54524]: osdmap e447: 8 total, 8 up, 8 in 2026-03-09T20:29:48.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:47 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-85","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:29:48.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:47 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-85","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:29:48.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:48 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-85","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:29:48.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:48 vm05 ceph-mon[61345]: osdmap e448: 8 total, 8 up, 8 in 2026-03-09T20:29:48.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:48 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:29:48.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:48 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-85","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:29:48.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:48 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-85","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:29:48.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:48 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-85","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T20:29:48.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:48 vm05 ceph-mon[61345]: osdmap e449: 8 total, 8 up, 8 in 2026-03-09T20:29:48.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:48 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-85","var": "dedup_tier","val": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:48.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:48 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-85","var": "dedup_tier","val": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:48.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:48 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-85","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:29:48.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:48 vm05 ceph-mon[51870]: osdmap e448: 8 total, 8 up, 8 in 2026-03-09T20:29:48.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:48 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:29:48.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:48 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-85","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:29:48.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:48 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-85","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:29:48.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:48 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-85","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T20:29:48.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:48 vm05 ceph-mon[51870]: osdmap e449: 8 total, 8 up, 8 in 2026-03-09T20:29:48.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:48 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-85","var": "dedup_tier","val": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:48.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:48 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-85","var": "dedup_tier","val": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:48.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:29:48 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:29:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:29:49.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:48 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-85","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:29:49.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:48 vm09 ceph-mon[54524]: osdmap e448: 8 total, 8 up, 8 in 2026-03-09T20:29:49.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:48 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:29:49.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:48 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-85","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:29:49.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:48 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-85","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:29:49.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:48 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-85","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T20:29:49.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:48 vm09 ceph-mon[54524]: osdmap e449: 8 total, 8 up, 8 in 2026-03-09T20:29:49.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:48 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-85","var": "dedup_tier","val": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:49.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:48 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-85","var": "dedup_tier","val": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:50.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:49 vm05 ceph-mon[61345]: pgmap v664: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 924 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.2 KiB/s wr, 10 op/s 2026-03-09T20:29:50.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:49 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-85","var": "dedup_tier","val": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:29:50.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:49 vm05 ceph-mon[61345]: osdmap e450: 8 total, 8 up, 8 in 2026-03-09T20:29:50.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:49 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T20:29:50.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:49 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T20:29:50.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:49 vm05 ceph-mon[51870]: pgmap v664: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 924 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.2 KiB/s wr, 10 op/s 2026-03-09T20:29:50.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:49 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-85","var": "dedup_tier","val": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:29:50.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:49 vm05 ceph-mon[51870]: osdmap e450: 8 total, 8 up, 8 in 2026-03-09T20:29:50.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:49 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T20:29:50.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:49 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T20:29:50.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:49 vm09 ceph-mon[54524]: pgmap v664: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 924 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.2 KiB/s wr, 10 op/s 2026-03-09T20:29:50.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:49 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-85","var": "dedup_tier","val": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:29:50.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:49 vm09 ceph-mon[54524]: osdmap e450: 8 total, 8 up, 8 in 2026-03-09T20:29:50.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:49 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T20:29:50.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:49 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T20:29:52.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:51 vm05 ceph-mon[51870]: pgmap v667: 292 pgs: 19 unknown, 273 active+clean; 8.3 MiB data, 924 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:29:52.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:51 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T20:29:52.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:51 vm05 ceph-mon[51870]: osdmap e451: 8 total, 8 up, 8 in 2026-03-09T20:29:52.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:51 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-85","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T20:29:52.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:51 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-85","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T20:29:52.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:51 vm05 ceph-mon[61345]: pgmap v667: 292 pgs: 19 unknown, 273 active+clean; 8.3 MiB data, 924 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:29:52.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:51 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T20:29:52.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:51 vm05 ceph-mon[61345]: osdmap e451: 8 total, 8 up, 8 in 2026-03-09T20:29:52.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:51 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-85","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T20:29:52.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:51 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-85","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T20:29:52.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:51 vm09 ceph-mon[54524]: pgmap v667: 292 pgs: 19 unknown, 273 active+clean; 8.3 MiB data, 924 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:29:52.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:51 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T20:29:52.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:51 vm09 ceph-mon[54524]: osdmap e451: 8 total, 8 up, 8 in 2026-03-09T20:29:52.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:51 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-85","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T20:29:52.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:51 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-85","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T20:29:53.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:52 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-85","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T20:29:53.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:52 vm05 ceph-mon[61345]: osdmap e452: 8 total, 8 up, 8 in 2026-03-09T20:29:53.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:52 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:29:53.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:52 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:29:53.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:52 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:29:53.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:52 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:29:53.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:52 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-85","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T20:29:53.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:52 vm05 ceph-mon[51870]: osdmap e452: 8 total, 8 up, 8 in 2026-03-09T20:29:53.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:52 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:29:53.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:52 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:29:53.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:52 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:29:53.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:52 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:29:53.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:52 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-85","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T20:29:53.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:52 vm09 ceph-mon[54524]: osdmap e452: 8 total, 8 up, 8 in 2026-03-09T20:29:53.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:52 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:29:53.273 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:52 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:29:53.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:52 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:29:53.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:52 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:29:54.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:53 vm05 ceph-mon[61345]: pgmap v670: 292 pgs: 292 active+clean; 8.3 MiB data, 928 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:29:54.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:53 vm05 ceph-mon[61345]: osdmap e453: 8 total, 8 up, 8 in 2026-03-09T20:29:54.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:53 vm05 ceph-mon[51870]: pgmap v670: 292 pgs: 292 active+clean; 8.3 MiB data, 928 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:29:54.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:53 vm05 ceph-mon[51870]: osdmap e453: 8 total, 8 up, 8 in 2026-03-09T20:29:54.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:53 vm09 ceph-mon[54524]: pgmap v670: 292 pgs: 292 active+clean; 8.3 MiB data, 928 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:29:54.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:53 vm09 ceph-mon[54524]: osdmap e453: 8 total, 8 up, 8 in 2026-03-09T20:29:55.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:54 vm05 ceph-mon[61345]: osdmap e454: 8 total, 8 up, 8 in 2026-03-09T20:29:55.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:54 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:55.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:54 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:55.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:54 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-85"}]: dispatch 2026-03-09T20:29:55.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:54 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-85"}]: dispatch 2026-03-09T20:29:55.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:54 vm05 ceph-mon[51870]: osdmap e454: 8 total, 8 up, 8 in 2026-03-09T20:29:55.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:54 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:55.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:54 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:55.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:54 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-85"}]: dispatch 2026-03-09T20:29:55.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:54 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-85"}]: dispatch 2026-03-09T20:29:55.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:54 vm09 ceph-mon[54524]: osdmap e454: 8 total, 8 up, 8 in 2026-03-09T20:29:55.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:54 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:55.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:54 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:29:55.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:54 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-85"}]: dispatch 2026-03-09T20:29:55.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:54 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-85"}]: dispatch 2026-03-09T20:29:55.947 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:29:55 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:29:56.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:55 vm09 ceph-mon[54524]: pgmap v673: 292 pgs: 292 active+clean; 8.3 MiB data, 928 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:29:56.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:55 vm09 ceph-mon[54524]: osdmap e455: 8 total, 8 up, 8 in 2026-03-09T20:29:56.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:55 vm05 ceph-mon[61345]: pgmap v673: 292 pgs: 292 active+clean; 8.3 MiB data, 928 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:29:56.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:55 vm05 ceph-mon[61345]: osdmap e455: 8 total, 8 up, 8 in 2026-03-09T20:29:56.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:55 vm05 ceph-mon[51870]: pgmap v673: 292 pgs: 292 active+clean; 8.3 MiB data, 928 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:29:56.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:55 vm05 ceph-mon[51870]: osdmap e455: 8 total, 8 up, 8 in 2026-03-09T20:29:56.978 INFO:tasks.workunit.client.0.vm05.stdout: OK ] LibRadosTwoPoolsPP.ProxyRead (18376 ms) 2026-03-09T20:29:56.978 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] 
LibRadosTwoPoolsPP.CachePin 2026-03-09T20:29:56.978 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.CachePin (22118 ms) 2026-03-09T20:29:56.978 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.SetRedirectRead 2026-03-09T20:29:56.978 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.SetRedirectRead (3077 ms) 2026-03-09T20:29:56.978 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestPromoteRead 2026-03-09T20:29:56.978 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T20:29:56.978 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestPromoteRead (3032 ms) 2026-03-09T20:29:56.978 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestRefRead 2026-03-09T20:29:56.978 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestRefRead (3101 ms) 2026-03-09T20:29:56.978 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestUnset 2026-03-09T20:29:56.978 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T20:29:56.978 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestUnset (3151 ms) 2026-03-09T20:29:56.978 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestDedupRefRead 2026-03-09T20:29:56.978 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T20:29:56.978 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestDedupRefRead (4040 ms) 2026-03-09T20:29:56.978 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestSnapRefcount 2026-03-09T20:29:56.978 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T20:29:56.978 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestSnapRefcount (38996 ms) 2026-03-09T20:29:56.978 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestSnapRefcount2 2026-03-09T20:29:56.978 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T20:29:56.978 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestSnapRefcount2 (17364 ms) 2026-03-09T20:29:56.978 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestTestSnapCreate 2026-03-09T20:29:56.979 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T20:29:56.979 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestTestSnapCreate (4120 ms) 2026-03-09T20:29:56.979 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestRedirectAfterPromote 2026-03-09T20:29:56.979 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T20:29:56.979 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestRedirectAfterPromote (3157 ms) 2026-03-09T20:29:56.979 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestCheckRefcountWhenModification 2026-03-09T20:29:56.979 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T20:29:56.979 
INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestCheckRefcountWhenModification (25078 ms) 2026-03-09T20:29:56.979 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestSnapIncCount 2026-03-09T20:29:56.979 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T20:29:56.979 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestSnapIncCount (15222 ms) 2026-03-09T20:29:56.979 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestEvict 2026-03-09T20:29:56.979 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T20:29:56.979 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestEvict (5050 ms) 2026-03-09T20:29:56.979 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestEvictPromote 2026-03-09T20:29:56.979 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T20:29:56.979 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestEvictPromote (4107 ms) 2026-03-09T20:29:56.979 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestSnapSizeMismatch 2026-03-09T20:29:56.979 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T20:29:56.979 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: waiting for scrubs... 2026-03-09T20:29:56.979 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: done waiting 2026-03-09T20:29:56.979 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestSnapSizeMismatch (24346 ms) 2026-03-09T20:29:56.979 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.DedupFlushRead 2026-03-09T20:29:56.979 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T20:29:56.979 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.DedupFlushRead (10215 ms) 2026-03-09T20:29:56.979 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestFlushSnap 2026-03-09T20:29:56.979 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T20:29:56.979 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestFlushSnap (9067 ms) 2026-03-09T20:29:56.979 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestFlushDupCount 2026-03-09T20:29:56.979 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T20:29:56.979 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestFlushDupCount (9170 ms) 2026-03-09T20:29:56.979 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.TierFlushDuringFlush 2026-03-09T20:29:57.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:56 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:29:57.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:56 vm09 ceph-mon[54524]: osdmap e456: 8 total, 8 up, 8 in 2026-03-09T20:29:57.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:56 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-87","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:29:57.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:56 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-87","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:29:57.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:56 vm09 ceph-mon[54524]: pgmap v676: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-09T20:29:57.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:56 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:29:57.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:56 vm05 ceph-mon[61345]: osdmap e456: 8 total, 8 up, 8 in 2026-03-09T20:29:57.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:56 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-87","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:29:57.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:56 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-87","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:29:57.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:56 vm05 ceph-mon[61345]: pgmap v676: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-09T20:29:57.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:56 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:29:57.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:56 vm05 ceph-mon[51870]: osdmap e456: 8 total, 8 up, 8 in 2026-03-09T20:29:57.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:56 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-87","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:29:57.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:56 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-87","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:29:57.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:56 vm05 ceph-mon[51870]: pgmap v676: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-09T20:29:58.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:57 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-87","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:29:58.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:57 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:29:58.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:57 vm09 ceph-mon[54524]: osdmap e457: 8 total, 8 up, 8 in 2026-03-09T20:29:58.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:57 vm09 ceph-mon[54524]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:29:58.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:57 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-87","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:29:58.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:57 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:29:58.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:57 vm05 ceph-mon[61345]: osdmap e457: 8 total, 8 up, 8 in 2026-03-09T20:29:58.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:57 vm05 ceph-mon[61345]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:29:58.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:57 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-87","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:29:58.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:57 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:29:58.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:57 vm05 ceph-mon[51870]: osdmap e457: 8 total, 8 up, 8 in 2026-03-09T20:29:58.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:57 vm05 ceph-mon[51870]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:29:58.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:29:58 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:29:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:29:59.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:59 vm05 ceph-mon[61345]: osdmap e458: 8 total, 8 up, 8 in 2026-03-09T20:29:59.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:59 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:29:59.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:59 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:29:59.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:29:59 vm05 ceph-mon[61345]: pgmap v679: 324 pgs: 64 unknown, 260 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-09T20:29:59.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:59 vm05 ceph-mon[51870]: osdmap e458: 8 total, 8 up, 8 in 2026-03-09T20:29:59.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:59 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:29:59.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:59 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:29:59.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:29:59 vm05 ceph-mon[51870]: pgmap v679: 324 pgs: 64 unknown, 260 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-09T20:29:59.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:59 vm09 ceph-mon[54524]: osdmap e458: 8 total, 8 up, 8 in 2026-03-09T20:29:59.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:59 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:29:59.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:59 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:29:59.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:29:59 vm09 ceph-mon[54524]: pgmap v679: 324 pgs: 64 unknown, 260 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-09T20:30:00.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:00 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-87","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T20:30:00.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:00 vm05 ceph-mon[61345]: osdmap e459: 8 total, 8 up, 8 in 2026-03-09T20:30:00.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:00 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-87","var": "dedup_tier","val": "test-rados-api-vm05-94573-89-test-flush"}]: dispatch 2026-03-09T20:30:00.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:00 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-87","var": "dedup_tier","val": "test-rados-api-vm05-94573-89-test-flush"}]: dispatch 2026-03-09T20:30:00.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:00 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:30:00.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:00 vm05 ceph-mon[61345]: Health detail: HEALTH_WARN 4 pool(s) do not have an application enabled 2026-03-09T20:30:00.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:00 vm05 ceph-mon[61345]: [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled 2026-03-09T20:30:00.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:00 vm05 ceph-mon[61345]: application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T20:30:00.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:00 vm05 ceph-mon[61345]: application not enabled on pool 'WatchNotifyvm05-95715-1' 2026-03-09T20:30:00.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:00 vm05 ceph-mon[61345]: application not enabled on pool 'AssertExistsvm05-95743-1' 2026-03-09T20:30:00.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:00 vm05 ceph-mon[61345]: application not enabled on pool 'test-rados-api-vm05-94573-89-test-flush' 2026-03-09T20:30:00.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:00 vm05 ceph-mon[61345]: use 'ceph osd pool application enable ', where is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 
2026-03-09T20:30:00.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:00 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-87","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T20:30:00.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:00 vm05 ceph-mon[51870]: osdmap e459: 8 total, 8 up, 8 in 2026-03-09T20:30:00.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:00 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-87","var": "dedup_tier","val": "test-rados-api-vm05-94573-89-test-flush"}]: dispatch 2026-03-09T20:30:00.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:00 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-87","var": "dedup_tier","val": "test-rados-api-vm05-94573-89-test-flush"}]: dispatch 2026-03-09T20:30:00.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:00 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:30:00.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:00 vm05 ceph-mon[51870]: Health detail: HEALTH_WARN 4 pool(s) do not have an application enabled 2026-03-09T20:30:00.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:00 vm05 ceph-mon[51870]: [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled 2026-03-09T20:30:00.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:00 vm05 ceph-mon[51870]: application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T20:30:00.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:00 vm05 ceph-mon[51870]: application not enabled on pool 'WatchNotifyvm05-95715-1' 2026-03-09T20:30:00.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:00 vm05 ceph-mon[51870]: application not enabled on pool 'AssertExistsvm05-95743-1' 2026-03-09T20:30:00.411 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:00 vm05 ceph-mon[51870]: application not enabled on pool 'test-rados-api-vm05-94573-89-test-flush' 2026-03-09T20:30:00.411 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:00 vm05 ceph-mon[51870]: use 'ceph osd pool application enable ', where is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-09T20:30:00.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:00 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-87","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T20:30:00.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:00 vm09 ceph-mon[54524]: osdmap e459: 8 total, 8 up, 8 in 2026-03-09T20:30:00.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:00 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-87","var": "dedup_tier","val": "test-rados-api-vm05-94573-89-test-flush"}]: dispatch 2026-03-09T20:30:00.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:00 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-87","var": "dedup_tier","val": "test-rados-api-vm05-94573-89-test-flush"}]: dispatch 2026-03-09T20:30:00.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:00 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:30:00.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:00 vm09 ceph-mon[54524]: Health detail: HEALTH_WARN 4 pool(s) do not have an application enabled 2026-03-09T20:30:00.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:00 vm09 ceph-mon[54524]: [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled 2026-03-09T20:30:00.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:00 vm09 ceph-mon[54524]: application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T20:30:00.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:00 vm09 ceph-mon[54524]: application not enabled on pool 'WatchNotifyvm05-95715-1' 2026-03-09T20:30:00.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:00 vm09 ceph-mon[54524]: application not enabled on pool 'AssertExistsvm05-95743-1' 2026-03-09T20:30:00.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:00 vm09 ceph-mon[54524]: application not enabled on pool 'test-rados-api-vm05-94573-89-test-flush' 2026-03-09T20:30:00.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:00 vm09 ceph-mon[54524]: use 'ceph osd pool application enable ', where is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-09T20:30:01.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:01 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-87","var": "dedup_tier","val": "test-rados-api-vm05-94573-89-test-flush"}]': finished 2026-03-09T20:30:01.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:01 vm05 ceph-mon[61345]: osdmap e460: 8 total, 8 up, 8 in 2026-03-09T20:30:01.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:01 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T20:30:01.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:01 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T20:30:01.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:01 vm05 ceph-mon[61345]: pgmap v682: 324 pgs: 41 unknown, 283 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:30:01.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:01 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-87","var": "dedup_tier","val": "test-rados-api-vm05-94573-89-test-flush"}]': finished 2026-03-09T20:30:01.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:01 vm05 ceph-mon[51870]: osdmap e460: 8 total, 8 up, 8 in 2026-03-09T20:30:01.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:01 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T20:30:01.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:01 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T20:30:01.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:01 vm05 ceph-mon[51870]: pgmap v682: 324 pgs: 41 unknown, 283 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:30:01.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:01 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-87","var": "dedup_tier","val": "test-rados-api-vm05-94573-89-test-flush"}]': finished 2026-03-09T20:30:01.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:01 vm09 ceph-mon[54524]: osdmap e460: 8 total, 8 up, 8 in 2026-03-09T20:30:01.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:01 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T20:30:01.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:01 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T20:30:01.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:01 vm09 ceph-mon[54524]: pgmap v682: 324 pgs: 41 unknown, 283 active+clean; 8.3 MiB data, 929 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:30:02.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:02 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T20:30:02.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:02 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-87","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T20:30:02.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:02 vm05 ceph-mon[61345]: osdmap e461: 8 total, 8 up, 8 in 2026-03-09T20:30:02.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:02 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-87","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T20:30:02.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:02 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-87","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T20:30:02.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:02 vm05 ceph-mon[61345]: osdmap e462: 8 total, 8 up, 8 in 2026-03-09T20:30:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:02 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T20:30:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:02 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-87","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T20:30:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:02 vm05 ceph-mon[51870]: osdmap e461: 8 total, 8 up, 8 in 2026-03-09T20:30:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:02 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-87","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T20:30:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:02 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-87","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T20:30:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:02 vm05 ceph-mon[51870]: osdmap e462: 8 total, 8 up, 8 in 2026-03-09T20:30:02.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:02 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T20:30:02.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:02 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-87","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T20:30:02.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:02 vm09 ceph-mon[54524]: osdmap e461: 8 total, 8 up, 8 in 2026-03-09T20:30:02.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:02 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-87","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T20:30:02.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:02 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-87","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T20:30:02.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:02 vm09 ceph-mon[54524]: osdmap e462: 8 total, 8 up, 8 in 2026-03-09T20:30:03.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:03 vm05 ceph-mon[61345]: pgmap v685: 324 pgs: 324 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:30:03.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:03 vm05 ceph-mon[51870]: pgmap v685: 324 pgs: 324 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:30:03.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:03 vm09 ceph-mon[54524]: pgmap v685: 324 pgs: 324 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:30:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:04 vm05 ceph-mon[61345]: osdmap e463: 8 total, 8 up, 8 in 2026-03-09T20:30:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:04 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:30:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:04 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:30:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:04 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-87"}]: dispatch 2026-03-09T20:30:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:04 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-87"}]: dispatch 2026-03-09T20:30:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:04 vm05 ceph-mon[51870]: osdmap e463: 8 total, 8 up, 8 in 2026-03-09T20:30:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:04 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:30:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:04 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:30:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:04 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-87"}]: dispatch 2026-03-09T20:30:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:04 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-87"}]: dispatch 2026-03-09T20:30:04.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:04 vm09 ceph-mon[54524]: osdmap e463: 8 total, 8 up, 8 in 2026-03-09T20:30:04.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:04 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:30:04.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:04 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:30:04.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:04 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-87"}]: dispatch 2026-03-09T20:30:04.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:04 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-87"}]: dispatch 2026-03-09T20:30:05.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:05 vm09 ceph-mon[54524]: osdmap e464: 8 total, 8 up, 8 in 2026-03-09T20:30:05.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:05 vm09 ceph-mon[54524]: pgmap v688: 260 pgs: 260 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:30:05.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:05 vm05 ceph-mon[61345]: osdmap e464: 8 total, 8 up, 8 in 2026-03-09T20:30:05.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:05 vm05 ceph-mon[61345]: pgmap v688: 260 pgs: 260 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:30:05.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:05 vm05 ceph-mon[51870]: osdmap e464: 8 total, 8 up, 8 in 2026-03-09T20:30:05.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:05 vm05 ceph-mon[51870]: pgmap v688: 260 pgs: 260 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:30:06.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:30:05 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:30:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:06 vm09 ceph-mon[54524]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 
2026-03-09T20:30:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:06 vm09 ceph-mon[54524]: osdmap e465: 8 total, 8 up, 8 in 2026-03-09T20:30:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:06 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-90","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:30:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:06 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-90","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:30:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:06 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:30:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:06 vm05 ceph-mon[61345]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:30:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:06 vm05 ceph-mon[61345]: osdmap e465: 8 total, 8 up, 8 in 2026-03-09T20:30:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:06 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-90","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:30:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:06 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-90","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:30:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:06 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:30:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:06 vm05 ceph-mon[51870]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:30:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:06 vm05 ceph-mon[51870]: osdmap e465: 8 total, 8 up, 8 in 2026-03-09T20:30:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:06 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-90","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:30:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:06 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-90","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:30:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:06 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:30:07.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:07 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-90","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:30:07.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:07 vm05 ceph-mon[61345]: pgmap v690: 292 pgs: 23 creating+peering, 9 unknown, 260 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 241 B/s wr, 2 op/s 2026-03-09T20:30:07.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:07 vm05 ceph-mon[61345]: osdmap e466: 8 total, 8 up, 8 in 2026-03-09T20:30:07.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:07 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:30:07.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:07 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:30:07.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:07 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:30:07.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:07 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-90","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:30:07.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:07 vm05 ceph-mon[51870]: pgmap v690: 292 pgs: 23 creating+peering, 9 unknown, 260 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 241 B/s wr, 2 op/s 2026-03-09T20:30:07.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:07 vm05 ceph-mon[51870]: osdmap e466: 8 total, 8 up, 8 in 2026-03-09T20:30:07.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:07 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:30:07.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:07 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:30:07.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:07 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:30:07.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:07 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-90","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:30:07.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:07 vm09 ceph-mon[54524]: pgmap v690: 292 pgs: 23 creating+peering, 9 unknown, 260 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 241 B/s wr, 2 op/s 2026-03-09T20:30:07.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:07 vm09 ceph-mon[54524]: osdmap e466: 8 total, 8 up, 8 in 2026-03-09T20:30:07.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:07 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:30:07.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:07 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:30:07.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:07 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:30:08.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:08 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T20:30:08.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:08 vm05 ceph-mon[61345]: osdmap e467: 8 total, 8 up, 8 in 2026-03-09T20:30:08.634 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:08 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T20:30:08.634 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:08 vm05 ceph-mon[51870]: osdmap e467: 8 total, 8 up, 8 in 2026-03-09T20:30:08.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:08 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T20:30:08.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:08 vm09 ceph-mon[54524]: osdmap e467: 8 total, 8 up, 8 in 2026-03-09T20:30:08.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:30:08 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:30:08] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:30:09.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:09 vm05 ceph-mon[61345]: pgmap v693: 292 pgs: 23 creating+peering, 9 unknown, 260 active+clean; 
8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 243 B/s wr, 2 op/s 2026-03-09T20:30:09.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:09 vm05 ceph-mon[61345]: osdmap e468: 8 total, 8 up, 8 in 2026-03-09T20:30:09.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:09 vm05 ceph-mon[51870]: pgmap v693: 292 pgs: 23 creating+peering, 9 unknown, 260 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 243 B/s wr, 2 op/s 2026-03-09T20:30:09.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:09 vm05 ceph-mon[51870]: osdmap e468: 8 total, 8 up, 8 in 2026-03-09T20:30:09.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:09 vm09 ceph-mon[54524]: pgmap v693: 292 pgs: 23 creating+peering, 9 unknown, 260 active+clean; 8.3 MiB data, 930 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 243 B/s wr, 2 op/s 2026-03-09T20:30:09.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:09 vm09 ceph-mon[54524]: osdmap e468: 8 total, 8 up, 8 in 2026-03-09T20:30:10.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:10 vm05 ceph-mon[61345]: osdmap e469: 8 total, 8 up, 8 in 2026-03-09T20:30:10.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:10 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:30:10.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:10 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:30:10.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:10 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-90"}]: dispatch 2026-03-09T20:30:10.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:10 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-90"}]: dispatch 2026-03-09T20:30:10.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:10 vm05 ceph-mon[51870]: osdmap e469: 8 total, 8 up, 8 in 2026-03-09T20:30:10.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:10 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:30:10.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:10 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:30:10.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:10 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-90"}]: dispatch 2026-03-09T20:30:10.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:10 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-90"}]: dispatch 2026-03-09T20:30:10.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:10 vm09 ceph-mon[54524]: osdmap e469: 8 total, 8 up, 8 in 2026-03-09T20:30:10.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:10 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:30:10.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:10 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:30:10.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:10 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-90"}]: dispatch 2026-03-09T20:30:10.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:10 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-90"}]: dispatch 2026-03-09T20:30:11.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:11 vm05 ceph-mon[61345]: pgmap v696: 292 pgs: 23 creating+peering, 269 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T20:30:11.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:11 vm05 ceph-mon[61345]: osdmap e470: 8 total, 8 up, 8 in 2026-03-09T20:30:11.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:11 vm05 ceph-mon[51870]: pgmap v696: 292 pgs: 23 creating+peering, 269 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T20:30:11.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:11 vm05 ceph-mon[51870]: osdmap e470: 8 total, 8 up, 8 in 2026-03-09T20:30:11.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:11 vm09 ceph-mon[54524]: pgmap v696: 292 pgs: 23 creating+peering, 269 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T20:30:11.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:11 vm09 ceph-mon[54524]: osdmap e470: 8 total, 8 up, 8 in 2026-03-09T20:30:12.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:12 vm09 ceph-mon[54524]: osdmap e471: 8 total, 8 up, 8 in 2026-03-09T20:30:12.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:12 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-92","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:30:12.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:12 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-92","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:30:12.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:12 vm05 ceph-mon[61345]: osdmap e471: 8 total, 8 up, 8 in 2026-03-09T20:30:12.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:12 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-92","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:30:12.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:12 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-92","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:30:12.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:12 vm05 ceph-mon[51870]: osdmap e471: 8 total, 8 up, 8 in 2026-03-09T20:30:12.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:12 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-92","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:30:12.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:12 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-92","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:30:14.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:13 vm05 ceph-mon[61345]: pgmap v699: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T20:30:14.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:13 vm05 ceph-mon[61345]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:30:14.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:13 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-92","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:30:14.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:13 vm05 ceph-mon[61345]: osdmap e472: 8 total, 8 up, 8 in 2026-03-09T20:30:14.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:13 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:30:14.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:13 vm05 ceph-mon[51870]: pgmap v699: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T20:30:14.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:13 vm05 ceph-mon[51870]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:30:14.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:13 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-92","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:30:14.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:13 vm05 ceph-mon[51870]: osdmap e472: 8 total, 8 up, 8 in 2026-03-09T20:30:14.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:13 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:30:14.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:13 vm09 ceph-mon[54524]: pgmap v699: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T20:30:14.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:13 vm09 ceph-mon[54524]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:30:14.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:13 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-92","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:30:14.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:13 vm09 ceph-mon[54524]: osdmap e472: 8 total, 8 up, 8 in 2026-03-09T20:30:14.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:13 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:30:15.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:14 vm05 ceph-mon[61345]: osdmap e473: 8 total, 8 up, 8 in 2026-03-09T20:30:15.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:14 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:30:15.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:14 vm05 ceph-mon[51870]: osdmap e473: 8 total, 8 up, 8 in 2026-03-09T20:30:15.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:14 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:30:15.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:14 vm09 ceph-mon[54524]: osdmap e473: 8 total, 8 up, 8 in 2026-03-09T20:30:15.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:14 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:30:16.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:15 vm09 ceph-mon[54524]: pgmap v702: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s wr, 0 op/s 2026-03-09T20:30:16.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:15 vm09 ceph-mon[54524]: osdmap e474: 8 total, 8 up, 8 in 2026-03-09T20:30:16.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:15 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:30:16.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:15 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:30:16.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:15 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-92"}]: dispatch 2026-03-09T20:30:16.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:15 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-92"}]: dispatch 2026-03-09T20:30:16.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:30:15 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:30:16.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:15 vm05 ceph-mon[61345]: pgmap v702: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s wr, 0 op/s 2026-03-09T20:30:16.169 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:15 vm05 ceph-mon[61345]: osdmap e474: 8 total, 8 up, 8 in 2026-03-09T20:30:16.169 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:15 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:30:16.169 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:15 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:30:16.169 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:15 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-92"}]: dispatch 2026-03-09T20:30:16.169 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:15 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-92"}]: dispatch 2026-03-09T20:30:16.170 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:15 vm05 ceph-mon[51870]: pgmap v702: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 931 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s wr, 0 op/s 2026-03-09T20:30:16.170 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:15 vm05 ceph-mon[51870]: osdmap e474: 8 total, 8 up, 8 in 2026-03-09T20:30:16.170 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:15 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:30:16.170 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:15 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:30:16.170 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:15 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-92"}]: dispatch 2026-03-09T20:30:16.170 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:15 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-92"}]: dispatch 2026-03-09T20:30:17.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:16 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:30:17.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:16 vm09 ceph-mon[54524]: osdmap e475: 8 total, 8 up, 8 in 2026-03-09T20:30:17.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:16 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:30:17.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:16 vm05 ceph-mon[61345]: osdmap e475: 8 total, 8 up, 8 in 2026-03-09T20:30:17.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:16 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:30:17.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:16 vm05 ceph-mon[51870]: osdmap e475: 8 total, 8 up, 8 in 2026-03-09T20:30:18.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:17 vm09 ceph-mon[54524]: pgmap v705: 260 pgs: 260 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T20:30:18.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:17 vm09 ceph-mon[54524]: osdmap e476: 8 total, 8 up, 8 in 2026-03-09T20:30:18.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:17 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-94","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:30:18.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:17 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-94","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:30:18.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:17 vm05 ceph-mon[61345]: pgmap v705: 260 pgs: 260 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T20:30:18.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:17 vm05 ceph-mon[61345]: osdmap e476: 8 total, 8 up, 8 in 2026-03-09T20:30:18.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:17 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-94","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:30:18.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:17 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-94","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:30:18.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:17 vm05 ceph-mon[51870]: pgmap v705: 260 pgs: 260 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T20:30:18.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:17 vm05 ceph-mon[51870]: osdmap e476: 8 total, 8 up, 8 in 2026-03-09T20:30:18.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:17 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-94","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:30:18.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:17 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-94","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:30:18.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:30:18 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:30:18] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:30:19.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:18 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-94","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:30:19.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:18 vm09 ceph-mon[54524]: osdmap e477: 8 total, 8 up, 8 in 2026-03-09T20:30:19.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:18 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:30:19.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:18 vm09 ceph-mon[54524]: pgmap v708: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T20:30:19.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:18 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-94","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:30:19.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:18 vm05 ceph-mon[61345]: osdmap e477: 8 total, 8 up, 8 in 2026-03-09T20:30:19.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:18 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:30:19.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:18 vm05 ceph-mon[61345]: pgmap v708: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T20:30:19.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:18 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-94","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:30:19.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:18 vm05 ceph-mon[51870]: osdmap e477: 8 total, 8 up, 8 in 2026-03-09T20:30:19.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:18 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:30:19.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:18 vm05 ceph-mon[51870]: pgmap v708: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T20:30:20.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:19 vm09 ceph-mon[54524]: osdmap e478: 8 total, 8 up, 8 in 2026-03-09T20:30:20.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:19 vm05 ceph-mon[61345]: osdmap e478: 8 total, 8 up, 8 in 2026-03-09T20:30:20.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:19 vm05 ceph-mon[51870]: osdmap e478: 8 total, 8 up, 8 in 2026-03-09T20:30:21.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:21 vm05 ceph-mon[61345]: osdmap e479: 8 total, 8 up, 8 in 2026-03-09T20:30:21.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:21 vm05 ceph-mon[61345]: pgmap v711: 292 pgs: 20 unknown, 272 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 6 op/s 2026-03-09T20:30:21.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:21 vm05 ceph-mon[51870]: osdmap e479: 8 total, 8 up, 8 in 2026-03-09T20:30:21.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:21 vm05 ceph-mon[51870]: pgmap v711: 292 pgs: 20 unknown, 272 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 6 op/s 2026-03-09T20:30:21.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:21 vm09 ceph-mon[54524]: osdmap e479: 8 total, 8 up, 8 in 2026-03-09T20:30:21.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:21 vm09 ceph-mon[54524]: pgmap v711: 292 pgs: 20 unknown, 272 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 6 op/s 2026-03-09T20:30:22.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:22 vm09 ceph-mon[54524]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:30:22.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:22 vm05 ceph-mon[61345]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:30:22.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:22 vm05 ceph-mon[51870]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:30:23.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:23 vm09 ceph-mon[54524]: pgmap v712: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.5 KiB/s wr, 5 op/s 
2026-03-09T20:30:23.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:23 vm05 ceph-mon[61345]: pgmap v712: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.5 KiB/s wr, 5 op/s 2026-03-09T20:30:23.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:23 vm05 ceph-mon[51870]: pgmap v712: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.5 KiB/s wr, 5 op/s 2026-03-09T20:30:25.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:25 vm05 ceph-mon[61345]: pgmap v713: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.3 KiB/s wr, 4 op/s 2026-03-09T20:30:25.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:25 vm05 ceph-mon[51870]: pgmap v713: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.3 KiB/s wr, 4 op/s 2026-03-09T20:30:25.687 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:25 vm09 ceph-mon[54524]: pgmap v713: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.3 KiB/s wr, 4 op/s 2026-03-09T20:30:26.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:30:25 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:30:26.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:26 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:30:26.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:26 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:30:26.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:26 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:30:27.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:27 vm05 ceph-mon[61345]: pgmap v714: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 4 op/s 2026-03-09T20:30:27.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:27 vm05 ceph-mon[51870]: pgmap v714: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 4 op/s 2026-03-09T20:30:27.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:27 vm09 ceph-mon[54524]: pgmap v714: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 4 op/s 2026-03-09T20:30:28.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:30:28 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:30:28] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:30:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:29 vm09 ceph-mon[54524]: pgmap v715: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 877 B/s wr, 3 op/s 2026-03-09T20:30:29.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:29 vm05 ceph-mon[61345]: pgmap v715: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 877 B/s wr, 3 op/s 2026-03-09T20:30:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:29 vm05 
ceph-mon[51870]: pgmap v715: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 877 B/s wr, 3 op/s 2026-03-09T20:30:30.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:30 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:30:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:30 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:30:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:30 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:30:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:31 vm05 ceph-mon[61345]: pgmap v716: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 792 B/s wr, 3 op/s 2026-03-09T20:30:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:31 vm05 ceph-mon[61345]: osdmap e480: 8 total, 8 up, 8 in 2026-03-09T20:30:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:31 vm05 ceph-mon[51870]: pgmap v716: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 792 B/s wr, 3 op/s 2026-03-09T20:30:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:31 vm05 ceph-mon[51870]: osdmap e480: 8 total, 8 up, 8 in 2026-03-09T20:30:32.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:31 vm09 ceph-mon[54524]: pgmap v716: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 792 B/s wr, 3 op/s 2026-03-09T20:30:32.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:31 vm09 ceph-mon[54524]: osdmap e480: 8 total, 8 up, 8 in 2026-03-09T20:30:33.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:33 vm09 ceph-mon[54524]: pgmap v718: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:30:33.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:33 vm05 ceph-mon[61345]: pgmap v718: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:30:33.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:33 vm05 ceph-mon[51870]: pgmap v718: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:30:35.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:35 vm05 ceph-mon[61345]: pgmap v719: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:30:35.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:35 vm05 ceph-mon[51870]: pgmap v719: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:30:36.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:35 vm09 ceph-mon[54524]: pgmap v719: 292 pgs: 292 active+clean; 8.3 MiB data, 932 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:30:36.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:30:35 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:30:36.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:36 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' 
entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:30:36.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:36 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:30:37.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:36 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:30:37.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:37 vm05 ceph-mon[61345]: pgmap v720: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s 2026-03-09T20:30:37.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:37 vm05 ceph-mon[51870]: pgmap v720: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s 2026-03-09T20:30:38.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:37 vm09 ceph-mon[54524]: pgmap v720: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s 2026-03-09T20:30:38.911 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:30:38 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:30:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:30:39.825 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:39 vm09 ceph-mon[54524]: pgmap v721: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s 2026-03-09T20:30:39.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:39 vm05 ceph-mon[51870]: pgmap v721: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s 2026-03-09T20:30:39.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:39 vm05 ceph-mon[61345]: pgmap v721: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s 2026-03-09T20:30:40.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:40 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:30:40.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:40 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:30:40.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:40 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-94"}]: dispatch 2026-03-09T20:30:40.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:40 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-94"}]: dispatch 2026-03-09T20:30:40.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:40 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:30:40.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:40 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:30:40.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:40 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-94"}]: dispatch 2026-03-09T20:30:40.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:40 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-94"}]: dispatch 2026-03-09T20:30:41.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:40 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:30:41.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:40 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:30:41.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:40 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-94"}]: dispatch 2026-03-09T20:30:41.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:40 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-94"}]: dispatch 2026-03-09T20:30:41.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:41 vm05 ceph-mon[61345]: pgmap v722: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s 2026-03-09T20:30:41.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:41 vm05 ceph-mon[61345]: osdmap e481: 8 total, 8 up, 8 in 2026-03-09T20:30:41.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:41 vm05 ceph-mon[61345]: osdmap e482: 8 total, 8 up, 8 in 2026-03-09T20:30:41.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:41 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:30:41.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:41 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:30:41.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:41 vm05 ceph-mon[51870]: pgmap v722: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s 2026-03-09T20:30:41.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:41 vm05 ceph-mon[51870]: osdmap e481: 8 total, 8 up, 8 in 2026-03-09T20:30:41.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:41 vm05 ceph-mon[51870]: osdmap e482: 8 total, 8 up, 8 in 2026-03-09T20:30:41.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:41 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:30:41.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:41 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:30:42.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:41 vm09 ceph-mon[54524]: pgmap v722: 292 pgs: 292 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s 2026-03-09T20:30:42.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:41 vm09 ceph-mon[54524]: osdmap e481: 8 total, 8 up, 8 in 2026-03-09T20:30:42.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:41 vm09 ceph-mon[54524]: osdmap e482: 8 total, 8 up, 8 in 2026-03-09T20:30:42.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:41 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:30:42.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:41 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:30:43.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:43 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-96","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:30:43.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:43 vm05 ceph-mon[61345]: osdmap e483: 8 total, 8 up, 8 in 2026-03-09T20:30:43.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:43 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:30:43.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:43 vm05 ceph-mon[61345]: pgmap v726: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:30:43.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:43 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-96","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:30:43.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:43 vm05 ceph-mon[51870]: osdmap e483: 8 total, 8 up, 8 in 2026-03-09T20:30:43.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:43 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:30:43.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:43 vm05 ceph-mon[51870]: pgmap v726: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:30:43.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:43 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-96","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:30:43.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:43 vm09 ceph-mon[54524]: osdmap e483: 8 total, 8 up, 8 in 2026-03-09T20:30:43.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:43 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:30:43.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:43 vm09 ceph-mon[54524]: pgmap v726: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:30:44.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:44 vm05 ceph-mon[61345]: osdmap e484: 8 total, 8 up, 8 in 2026-03-09T20:30:44.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:44 vm05 ceph-mon[51870]: osdmap e484: 8 total, 8 up, 8 in 2026-03-09T20:30:44.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:44 vm09 ceph-mon[54524]: osdmap e484: 8 total, 8 up, 8 in 2026-03-09T20:30:45.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:45 vm05 ceph-mon[61345]: pgmap v728: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:30:45.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:45 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:30:45.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:45 vm05 ceph-mon[51870]: pgmap v728: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:30:45.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:45 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:30:45.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:45 vm09 ceph-mon[54524]: pgmap v728: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 936 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:30:45.523 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:45 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:30:46.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:30:45 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:30:46.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:46 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:30:46.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:46 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:30:46.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:46 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:30:47.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:47 vm05 ceph-mon[61345]: pgmap v729: 292 pgs: 292 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 891 B/s rd, 1.0 KiB/s wr, 2 op/s 2026-03-09T20:30:47.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:47 vm05 ceph-mon[51870]: pgmap v729: 292 pgs: 292 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 891 B/s rd, 1.0 KiB/s wr, 2 op/s 2026-03-09T20:30:47.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:47 vm09 ceph-mon[54524]: pgmap v729: 292 pgs: 292 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 891 B/s rd, 1.0 KiB/s wr, 2 op/s 2026-03-09T20:30:48.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:30:48 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:30:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:30:49.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:49 vm05 ceph-mon[61345]: pgmap v730: 292 pgs: 292 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 707 B/s rd, 849 B/s wr, 2 op/s 2026-03-09T20:30:49.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:49 vm05 ceph-mon[51870]: pgmap v730: 292 pgs: 292 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 707 B/s rd, 849 B/s wr, 2 op/s 2026-03-09T20:30:49.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:49 vm09 ceph-mon[54524]: pgmap v730: 292 pgs: 292 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 707 B/s rd, 849 B/s wr, 2 op/s 2026-03-09T20:30:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:51 vm09 ceph-mon[54524]: pgmap v731: 292 pgs: 292 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 746 B/s wr, 2 op/s 2026-03-09T20:30:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:51 vm05 ceph-mon[61345]: pgmap v731: 292 pgs: 292 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 746 B/s wr, 2 op/s 2026-03-09T20:30:51.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:51 vm05 ceph-mon[51870]: pgmap v731: 292 pgs: 292 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 746 B/s wr, 2 op/s 2026-03-09T20:30:52.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:52 vm09 ceph-mon[54524]: from='mgr.24602 
v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:30:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:52 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:30:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:52 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:30:53.410 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 09 20:30:53 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-3[81622]: 2026-03-09T20:30:53.148+0000 7f15eb0f6640 -1 snap_mapper.add_oid found existing snaps mapped on 103:82057baf:test-rados-api-vm05-94573-97::foo:21, removing 2026-03-09T20:30:53.487 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 20:30:53 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-5[64082]: 2026-03-09T20:30:53.147+0000 7fda98ad2640 -1 snap_mapper.add_oid found existing snaps mapped on 103:82057baf:test-rados-api-vm05-94573-97::foo:21, removing 2026-03-09T20:30:53.487 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 20:30:53 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-6[69280]: 2026-03-09T20:30:53.147+0000 7f0c3116f640 -1 snap_mapper.add_oid found existing snaps mapped on 103:82057baf:test-rados-api-vm05-94573-97::foo:21, removing 2026-03-09T20:30:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:53 vm09 ceph-mon[54524]: pgmap v732: 292 pgs: 292 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 614 B/s wr, 2 op/s 2026-03-09T20:30:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:53 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:30:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:53 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:30:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:53 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:30:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:53 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:30:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:53 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:30:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:53 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:30:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:53 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:30:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:53 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:30:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:53 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:30:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:53 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-96"}]: dispatch 2026-03-09T20:30:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:53 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-96"}]: dispatch 2026-03-09T20:30:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:53 vm05 ceph-mon[61345]: pgmap v732: 292 pgs: 292 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 614 B/s wr, 2 op/s 2026-03-09T20:30:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:53 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:30:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:53 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:30:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:53 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:30:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:53 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:30:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:53 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:30:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:53 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:30:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:53 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:30:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:53 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:30:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:53 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:30:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:53 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-96"}]: dispatch 2026-03-09T20:30:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:53 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-96"}]: dispatch 2026-03-09T20:30:53.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:53 vm05 ceph-mon[51870]: pgmap v732: 292 pgs: 292 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 614 B/s wr, 2 op/s 2026-03-09T20:30:53.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:53 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:30:53.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:53 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:30:53.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:53 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:30:53.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:53 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:30:53.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:53 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:30:53.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:53 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:30:53.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:53 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:30:53.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:53 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:30:53.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:53 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:30:53.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:53 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-96"}]: dispatch 2026-03-09T20:30:53.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:53 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-96"}]: dispatch 2026-03-09T20:30:54.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:54 vm05 ceph-mon[61345]: osdmap e485: 8 total, 8 up, 8 in 2026-03-09T20:30:54.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:54 vm05 ceph-mon[51870]: osdmap e485: 8 total, 8 up, 8 in 2026-03-09T20:30:55.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:54 vm09 ceph-mon[54524]: osdmap e485: 8 total, 8 up, 8 in 2026-03-09T20:30:56.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:55 vm09 ceph-mon[54524]: pgmap v734: 260 pgs: 260 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 307 B/s wr, 1 op/s 2026-03-09T20:30:56.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:55 vm09 ceph-mon[54524]: osdmap e486: 8 total, 8 up, 8 in 2026-03-09T20:30:56.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:55 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-98","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:30:56.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:55 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-98","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:30:56.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:30:55 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:30:56.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:55 vm05 ceph-mon[61345]: pgmap v734: 260 pgs: 260 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 307 B/s wr, 1 op/s 2026-03-09T20:30:56.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:55 vm05 ceph-mon[61345]: osdmap e486: 8 total, 8 up, 8 in 2026-03-09T20:30:56.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:55 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-98","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:30:56.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:55 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-98","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:30:56.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:55 vm05 ceph-mon[51870]: pgmap v734: 260 pgs: 260 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 307 B/s wr, 1 op/s 2026-03-09T20:30:56.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:55 vm05 ceph-mon[51870]: osdmap e486: 8 total, 8 up, 8 in 2026-03-09T20:30:56.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:55 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-98","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:30:56.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:55 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-98","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:30:57.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:56 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-98","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:30:57.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:56 vm09 ceph-mon[54524]: osdmap e487: 8 total, 8 up, 8 in 2026-03-09T20:30:57.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:56 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:30:57.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:56 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:30:57.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:56 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:30:57.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:56 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-98","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:30:57.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:56 vm05 ceph-mon[61345]: osdmap e487: 8 total, 8 up, 8 in 2026-03-09T20:30:57.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:56 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:30:57.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:56 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:30:57.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:56 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:30:57.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:56 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-98","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:30:57.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:56 vm05 ceph-mon[51870]: osdmap e487: 8 total, 8 up, 8 in 2026-03-09T20:30:57.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:56 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:30:57.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:56 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:30:57.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:56 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:30:58.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:57 vm09 ceph-mon[54524]: pgmap v737: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:30:58.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:57 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-98", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:30:58.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:57 vm09 ceph-mon[54524]: osdmap e488: 8 total, 8 up, 8 in 2026-03-09T20:30:58.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:57 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-98", "mode": "writeback"}]: dispatch 2026-03-09T20:30:58.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:57 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-98", "mode": "writeback"}]: dispatch 2026-03-09T20:30:58.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:57 vm05 ceph-mon[61345]: pgmap v737: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:30:58.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:57 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-98", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:30:58.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:57 vm05 ceph-mon[61345]: osdmap e488: 8 total, 8 up, 8 in 2026-03-09T20:30:58.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:57 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-98", "mode": "writeback"}]: dispatch 2026-03-09T20:30:58.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:57 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-98", "mode": "writeback"}]: dispatch 2026-03-09T20:30:58.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:57 vm05 ceph-mon[51870]: pgmap v737: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:30:58.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:57 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-98", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:30:58.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:57 vm05 ceph-mon[51870]: osdmap e488: 8 total, 8 up, 8 in 2026-03-09T20:30:58.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:57 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-98", "mode": "writeback"}]: dispatch 2026-03-09T20:30:58.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:57 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-98", "mode": "writeback"}]: dispatch 2026-03-09T20:30:58.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:58 vm05 ceph-mon[61345]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:30:58.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:58 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-98", "mode": "writeback"}]': finished 2026-03-09T20:30:58.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:58 vm05 ceph-mon[61345]: osdmap e489: 8 total, 8 up, 8 in 2026-03-09T20:30:58.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:58 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-98"}]: dispatch 2026-03-09T20:30:58.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:58 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-98"}]: dispatch 2026-03-09T20:30:58.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:58 vm05 ceph-mon[51870]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:30:58.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:58 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-98", "mode": "writeback"}]': finished 2026-03-09T20:30:58.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:58 vm05 ceph-mon[51870]: osdmap e489: 8 total, 8 up, 8 in 2026-03-09T20:30:58.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:58 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-98"}]: dispatch 2026-03-09T20:30:58.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:58 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-98"}]: dispatch 2026-03-09T20:30:58.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:30:58 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:30:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:30:58.943 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:58 vm09 ceph-mon[54524]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:30:58.943 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:58 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-98", "mode": "writeback"}]': finished 2026-03-09T20:30:58.943 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:58 vm09 ceph-mon[54524]: osdmap e489: 8 total, 8 up, 8 in 2026-03-09T20:30:58.943 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:58 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-98"}]: dispatch 2026-03-09T20:30:58.943 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:58 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-98"}]: dispatch 2026-03-09T20:30:58.944 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:30:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=cleanup t=2026-03-09T20:30:58.786062655Z level=info msg="Completed cleanup jobs" duration=1.902675ms 2026-03-09T20:30:59.273 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:30:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=plugins.update.checker t=2026-03-09T20:30:58.944126618Z level=info msg="Update check succeeded" duration=49.77878ms 2026-03-09T20:31:00.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:59 vm09 ceph-mon[54524]: pgmap v740: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:31:00.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:59 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-98"}]': finished 2026-03-09T20:31:00.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:59 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T20:31:00.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:59 vm09 ceph-mon[54524]: osdmap e490: 8 total, 8 up, 8 in 2026-03-09T20:31:00.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:30:59 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T20:31:00.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:59 vm05 ceph-mon[61345]: pgmap v740: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:31:00.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:59 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-98"}]': finished 2026-03-09T20:31:00.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:59 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T20:31:00.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:59 vm05 ceph-mon[61345]: osdmap e490: 8 total, 8 up, 8 in 2026-03-09T20:31:00.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:30:59 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T20:31:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:59 vm05 ceph-mon[51870]: pgmap v740: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:31:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:59 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-98"}]': finished 2026-03-09T20:31:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:59 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T20:31:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:59 vm05 ceph-mon[51870]: osdmap e490: 8 total, 8 up, 8 in 2026-03-09T20:31:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:30:59 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T20:31:01.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:00 vm09 ceph-mon[54524]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:31:01.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:00 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-98","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T20:31:01.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:00 vm09 ceph-mon[54524]: osdmap e491: 8 total, 8 up, 8 in 2026-03-09T20:31:01.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:00 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-09T20:31:01.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:00 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-09T20:31:01.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:00 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:31:01.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:00 vm05 ceph-mon[61345]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:31:01.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:00 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-98","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T20:31:01.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:00 vm05 ceph-mon[61345]: osdmap e491: 8 total, 8 up, 8 in 2026-03-09T20:31:01.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:00 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-09T20:31:01.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:00 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-09T20:31:01.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:00 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:31:01.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:00 vm05 ceph-mon[51870]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:31:01.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:00 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-98","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T20:31:01.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:00 vm05 ceph-mon[51870]: osdmap e491: 8 total, 8 up, 8 in 2026-03-09T20:31:01.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:00 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-09T20:31:01.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:00 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-09T20:31:01.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:00 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:31:02.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:01 vm09 ceph-mon[54524]: pgmap v743: 292 pgs: 18 creating+peering, 274 active+clean; 8.3 MiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:31:02.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:01 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-98","var": "hit_set_count","val": "1"}]': finished 2026-03-09T20:31:02.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:01 vm09 ceph-mon[54524]: osdmap e492: 8 total, 8 up, 8 in 2026-03-09T20:31:02.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:01 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:31:02.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:01 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:31:02.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:01 vm05 ceph-mon[61345]: pgmap v743: 292 pgs: 18 creating+peering, 274 active+clean; 8.3 MiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:31:02.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:01 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-98","var": "hit_set_count","val": "1"}]': finished 2026-03-09T20:31:02.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:01 vm05 ceph-mon[61345]: osdmap e492: 8 total, 8 up, 8 in 2026-03-09T20:31:02.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:01 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:31:02.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:01 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:31:02.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:01 vm05 ceph-mon[51870]: pgmap v743: 292 pgs: 18 creating+peering, 274 active+clean; 8.3 MiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:31:02.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:01 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-98","var": "hit_set_count","val": "1"}]': finished 2026-03-09T20:31:02.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:01 vm05 ceph-mon[51870]: osdmap e492: 8 total, 8 up, 8 in 2026-03-09T20:31:02.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:01 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:31:02.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:01 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:31:03.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:02 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-98","var": "hit_set_period","val": "600"}]': finished 2026-03-09T20:31:03.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:02 vm09 ceph-mon[54524]: osdmap e493: 8 total, 8 up, 8 in 2026-03-09T20:31:03.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:02 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-09T20:31:03.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:02 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-09T20:31:03.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:02 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-98","var": "hit_set_period","val": "600"}]': finished 2026-03-09T20:31:03.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:02 vm05 ceph-mon[61345]: osdmap e493: 8 total, 8 up, 8 in 2026-03-09T20:31:03.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:02 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-09T20:31:03.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:02 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-09T20:31:03.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:02 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-98","var": "hit_set_period","val": "600"}]': finished 2026-03-09T20:31:03.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:02 vm05 ceph-mon[51870]: osdmap e493: 8 total, 8 up, 8 in 2026-03-09T20:31:03.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:02 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-09T20:31:03.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:02 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-09T20:31:04.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:03 vm05 ceph-mon[61345]: pgmap v746: 292 pgs: 292 active+clean; 8.3 MiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:31:04.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:03 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-98","var": "target_max_objects","val": "250"}]': finished 2026-03-09T20:31:04.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:03 vm05 ceph-mon[61345]: osdmap e494: 8 total, 8 up, 8 in 2026-03-09T20:31:04.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:03 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:31:04.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:03 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:31:04.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:03 vm05 ceph-mon[51870]: pgmap v746: 292 pgs: 292 active+clean; 8.3 MiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:31:04.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:03 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-98","var": "target_max_objects","val": "250"}]': finished 2026-03-09T20:31:04.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:03 vm05 ceph-mon[51870]: osdmap e494: 8 total, 8 up, 8 in 2026-03-09T20:31:04.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:03 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:31:04.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:03 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:31:04.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:03 vm09 ceph-mon[54524]: pgmap v746: 292 pgs: 292 active+clean; 8.3 MiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:31:04.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:03 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-98","var": "target_max_objects","val": "250"}]': finished 2026-03-09T20:31:04.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:03 vm09 ceph-mon[54524]: osdmap e494: 8 total, 8 up, 8 in 2026-03-09T20:31:04.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:03 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:31:04.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:03 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:31:05.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:04 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:31:05.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:04 vm05 ceph-mon[61345]: osdmap e495: 8 total, 8 up, 8 in 2026-03-09T20:31:05.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:04 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-98"}]: dispatch 2026-03-09T20:31:05.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:04 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-98"}]: dispatch 2026-03-09T20:31:05.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:04 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-98"}]': finished 2026-03-09T20:31:05.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:04 vm05 ceph-mon[61345]: osdmap e496: 8 total, 8 up, 8 in 2026-03-09T20:31:05.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:04 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:31:05.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:04 vm05 ceph-mon[51870]: osdmap e495: 8 total, 8 up, 8 in 2026-03-09T20:31:05.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:04 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-98"}]: dispatch 2026-03-09T20:31:05.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:04 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-98"}]: dispatch 2026-03-09T20:31:05.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:04 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-98"}]': finished 2026-03-09T20:31:05.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:04 vm05 ceph-mon[51870]: osdmap e496: 8 total, 8 up, 8 in 2026-03-09T20:31:05.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:04 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:31:05.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:04 vm09 ceph-mon[54524]: osdmap e495: 8 total, 8 up, 8 in 2026-03-09T20:31:05.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:04 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-98"}]: dispatch 2026-03-09T20:31:05.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:04 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-98"}]: dispatch 2026-03-09T20:31:05.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:04 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-98"}]': finished 2026-03-09T20:31:05.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:04 vm09 ceph-mon[54524]: osdmap e496: 8 total, 8 up, 8 in 2026-03-09T20:31:06.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:05 vm09 ceph-mon[54524]: pgmap v749: 292 pgs: 292 active+clean; 8.3 MiB data, 938 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:31:06.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:31:05 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:31:06.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:05 vm05 ceph-mon[61345]: pgmap v749: 292 pgs: 292 active+clean; 8.3 MiB data, 938 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:31:06.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:05 vm05 ceph-mon[51870]: pgmap v749: 292 pgs: 292 active+clean; 8.3 MiB data, 938 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:31:07.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:06 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:31:07.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:06 vm05 ceph-mon[61345]: osdmap e497: 8 total, 8 up, 8 in 2026-03-09T20:31:07.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:06 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' 
entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:31:07.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:06 vm05 ceph-mon[51870]: osdmap e497: 8 total, 8 up, 8 in 2026-03-09T20:31:07.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:06 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:31:07.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:06 vm09 ceph-mon[54524]: osdmap e497: 8 total, 8 up, 8 in 2026-03-09T20:31:08.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:07 vm05 ceph-mon[61345]: pgmap v752: 260 pgs: 260 active+clean; 8.3 MiB data, 956 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:08.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:07 vm05 ceph-mon[61345]: osdmap e498: 8 total, 8 up, 8 in 2026-03-09T20:31:08.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:07 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:31:08.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:07 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:31:08.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:07 vm05 ceph-mon[51870]: pgmap v752: 260 pgs: 260 active+clean; 8.3 MiB data, 956 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:08.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:07 vm05 ceph-mon[51870]: osdmap e498: 8 total, 8 up, 8 in 2026-03-09T20:31:08.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:07 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:31:08.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:07 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:31:08.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:07 vm09 ceph-mon[54524]: pgmap v752: 260 pgs: 260 active+clean; 8.3 MiB data, 956 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:08.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:07 vm09 ceph-mon[54524]: osdmap e498: 8 total, 8 up, 8 in 2026-03-09T20:31:08.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:07 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:31:08.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:07 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:31:08.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:08 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-100","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:31:08.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:08 vm05 ceph-mon[61345]: osdmap e499: 8 total, 8 up, 8 in 2026-03-09T20:31:08.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:08 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:31:08.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:08 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:31:08.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:08 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-100","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:31:08.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:08 vm05 ceph-mon[51870]: osdmap e499: 8 total, 8 up, 8 in 2026-03-09T20:31:08.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:08 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:31:08.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:08 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:31:08.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:31:08 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:31:08] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:31:09.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:08 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-100","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:31:09.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:08 vm09 ceph-mon[54524]: osdmap e499: 8 total, 8 up, 8 in 2026-03-09T20:31:09.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:08 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:31:09.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:08 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:31:10.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:09 vm05 ceph-mon[61345]: pgmap v755: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 956 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:10.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:09 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-100", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:31:10.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:09 vm05 ceph-mon[61345]: osdmap e500: 8 total, 8 up, 8 in 2026-03-09T20:31:10.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:09 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-100"}]: dispatch 2026-03-09T20:31:10.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:09 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-100"}]: dispatch 2026-03-09T20:31:10.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:09 vm05 ceph-mon[51870]: pgmap v755: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 956 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:10.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:09 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-100", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:31:10.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:09 vm05 ceph-mon[51870]: osdmap e500: 8 total, 8 up, 8 in 2026-03-09T20:31:10.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:09 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-100"}]: dispatch 2026-03-09T20:31:10.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:09 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-100"}]: dispatch 2026-03-09T20:31:10.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:09 vm09 ceph-mon[54524]: pgmap v755: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 956 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:10.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:09 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-100", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:31:10.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:09 vm09 ceph-mon[54524]: osdmap e500: 8 total, 8 up, 8 in 2026-03-09T20:31:10.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:09 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-100"}]: dispatch 2026-03-09T20:31:10.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:09 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-100"}]: dispatch 2026-03-09T20:31:11.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:10 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-100"}]': finished 2026-03-09T20:31:11.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:10 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-100", "mode": "writeback"}]: dispatch 2026-03-09T20:31:11.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:10 vm05 ceph-mon[61345]: osdmap e501: 8 total, 8 up, 8 in 2026-03-09T20:31:11.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:10 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-100", "mode": "writeback"}]: dispatch 2026-03-09T20:31:11.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:10 vm05 ceph-mon[61345]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:31:11.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:10 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-100", "mode": "writeback"}]': finished 2026-03-09T20:31:11.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:10 vm05 ceph-mon[61345]: osdmap e502: 8 total, 8 up, 8 in 2026-03-09T20:31:11.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:10 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-100"}]': finished 2026-03-09T20:31:11.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:10 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-100", "mode": "writeback"}]: dispatch 2026-03-09T20:31:11.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:10 vm05 ceph-mon[51870]: osdmap e501: 8 total, 8 up, 8 in 2026-03-09T20:31:11.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:10 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-100", "mode": "writeback"}]: dispatch 2026-03-09T20:31:11.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:10 vm05 ceph-mon[51870]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:31:11.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:10 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-100", "mode": "writeback"}]': finished 2026-03-09T20:31:11.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:10 vm05 ceph-mon[51870]: osdmap e502: 8 total, 8 up, 8 in 2026-03-09T20:31:11.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:10 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-6", "overlaypool": "test-rados-api-vm05-94573-100"}]': finished 2026-03-09T20:31:11.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:10 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-100", "mode": "writeback"}]: dispatch 2026-03-09T20:31:11.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:10 vm09 ceph-mon[54524]: osdmap e501: 8 total, 8 up, 8 in 2026-03-09T20:31:11.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:10 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-100", "mode": "writeback"}]: dispatch 2026-03-09T20:31:11.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:10 vm09 ceph-mon[54524]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:31:11.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:10 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-100", "mode": "writeback"}]': finished 2026-03-09T20:31:11.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:10 vm09 ceph-mon[54524]: osdmap e502: 8 total, 8 up, 8 in 2026-03-09T20:31:12.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:11 vm05 ceph-mon[61345]: pgmap v758: 292 pgs: 13 unknown, 279 active+clean; 8.3 MiB data, 956 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:12.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:11 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T20:31:12.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:11 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T20:31:12.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:11 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-100","var": "hit_set_count","val": "2"}]': finished 2026-03-09T20:31:12.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:11 vm05 ceph-mon[61345]: osdmap e503: 8 total, 8 up, 8 in 2026-03-09T20:31:12.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:11 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:31:12.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:11 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:31:12.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:11 vm05 ceph-mon[51870]: pgmap v758: 292 pgs: 13 unknown, 279 active+clean; 8.3 MiB data, 956 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:12.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:11 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T20:31:12.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:11 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T20:31:12.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:11 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-100","var": "hit_set_count","val": "2"}]': finished 2026-03-09T20:31:12.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:11 vm05 ceph-mon[51870]: osdmap e503: 8 total, 8 up, 8 in 2026-03-09T20:31:12.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:11 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:31:12.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:11 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:31:12.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:11 vm09 ceph-mon[54524]: pgmap v758: 292 pgs: 13 unknown, 279 active+clean; 8.3 MiB data, 956 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:12.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:11 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T20:31:12.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:11 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T20:31:12.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:11 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-100","var": "hit_set_count","val": "2"}]': finished 2026-03-09T20:31:12.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:11 vm09 ceph-mon[54524]: osdmap e503: 8 total, 8 up, 8 in 2026-03-09T20:31:12.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:11 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:31:12.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:11 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:31:14.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:13 vm05 ceph-mon[61345]: pgmap v761: 292 pgs: 292 active+clean; 8.3 MiB data, 961 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:14.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:13 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-100","var": "hit_set_period","val": "600"}]': finished 2026-03-09T20:31:14.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:13 vm05 ceph-mon[61345]: osdmap e504: 8 total, 8 up, 8 in 2026-03-09T20:31:14.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:13 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-100","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T20:31:14.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:13 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-100","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T20:31:14.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:13 vm05 ceph-mon[51870]: pgmap v761: 292 pgs: 292 active+clean; 8.3 MiB data, 961 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:14.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:13 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-100","var": "hit_set_period","val": "600"}]': finished 2026-03-09T20:31:14.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:13 vm05 ceph-mon[51870]: osdmap e504: 8 total, 8 up, 8 in 2026-03-09T20:31:14.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:13 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-100","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T20:31:14.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:13 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-100","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T20:31:14.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:13 vm09 ceph-mon[54524]: pgmap v761: 292 pgs: 292 active+clean; 8.3 MiB data, 961 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:14.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:13 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-100","var": "hit_set_period","val": "600"}]': finished 2026-03-09T20:31:14.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:13 vm09 ceph-mon[54524]: osdmap e504: 8 total, 8 up, 8 in 2026-03-09T20:31:14.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:13 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-100","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T20:31:14.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:13 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-100","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T20:31:15.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:14 vm09 ceph-mon[54524]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:31:15.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:14 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-100","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T20:31:15.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:14 vm09 ceph-mon[54524]: osdmap e505: 8 total, 8 up, 8 in 2026-03-09T20:31:15.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:14 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-100","var": "min_read_recency_for_promote","val": "10000"}]: dispatch 2026-03-09T20:31:15.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:14 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-100","var": "min_read_recency_for_promote","val": "10000"}]: dispatch 2026-03-09T20:31:15.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:14 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-100","var": "min_read_recency_for_promote","val": "10000"}]': finished 2026-03-09T20:31:15.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:14 vm09 ceph-mon[54524]: osdmap e506: 8 total, 8 up, 8 in 2026-03-09T20:31:15.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:14 vm05 ceph-mon[61345]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:31:15.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:14 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-100","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T20:31:15.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:14 vm05 ceph-mon[61345]: osdmap e505: 8 total, 8 up, 8 in 2026-03-09T20:31:15.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:14 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-100","var": "min_read_recency_for_promote","val": "10000"}]: dispatch 2026-03-09T20:31:15.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:14 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-100","var": "min_read_recency_for_promote","val": "10000"}]: dispatch 2026-03-09T20:31:15.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:14 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-100","var": "min_read_recency_for_promote","val": "10000"}]': finished 2026-03-09T20:31:15.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:14 vm05 ceph-mon[61345]: osdmap e506: 8 total, 8 up, 8 in 2026-03-09T20:31:15.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:14 vm05 ceph-mon[51870]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:31:15.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:14 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-100","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T20:31:15.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:14 vm05 ceph-mon[51870]: osdmap e505: 8 total, 8 up, 8 in 2026-03-09T20:31:15.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:14 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-100","var": "min_read_recency_for_promote","val": "10000"}]: dispatch 2026-03-09T20:31:15.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:14 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-100","var": "min_read_recency_for_promote","val": "10000"}]: dispatch 2026-03-09T20:31:15.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:14 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-100","var": "min_read_recency_for_promote","val": "10000"}]': finished 2026-03-09T20:31:15.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:14 vm05 ceph-mon[51870]: osdmap e506: 8 total, 8 up, 8 in 2026-03-09T20:31:16.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:15 vm09 ceph-mon[54524]: pgmap v764: 292 pgs: 292 active+clean; 8.3 MiB data, 961 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:31:16.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:15 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:31:16.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:15 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:31:16.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:15 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:31:16.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:31:15 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:31:16.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:15 vm05 ceph-mon[61345]: pgmap v764: 292 pgs: 292 active+clean; 8.3 MiB data, 961 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:31:16.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:15 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:31:16.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:15 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:31:16.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:15 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:31:16.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:15 vm05 ceph-mon[51870]: pgmap v764: 292 pgs: 292 active+clean; 8.3 MiB data, 961 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:31:16.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:15 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:31:16.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:15 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:31:16.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:15 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:31:17.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:16 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:31:17.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:16 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:31:17.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:16 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-100"}]: dispatch 2026-03-09T20:31:17.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:16 vm09 ceph-mon[54524]: osdmap e507: 8 total, 8 up, 8 in 2026-03-09T20:31:17.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:16 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-100"}]: dispatch 2026-03-09T20:31:17.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:16 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-100"}]': finished 2026-03-09T20:31:17.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:16 vm09 ceph-mon[54524]: osdmap e508: 8 total, 8 up, 8 in 2026-03-09T20:31:17.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:16 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:31:17.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:16 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:31:17.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:16 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-100"}]: dispatch 2026-03-09T20:31:17.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:16 vm05 ceph-mon[61345]: osdmap e507: 8 total, 8 up, 8 in 2026-03-09T20:31:17.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:16 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-100"}]: dispatch 2026-03-09T20:31:17.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:16 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-100"}]': finished 2026-03-09T20:31:17.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:16 vm05 ceph-mon[61345]: osdmap e508: 8 total, 8 up, 8 in 2026-03-09T20:31:17.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:16 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:31:17.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:16 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]': finished 2026-03-09T20:31:17.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:16 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-100"}]: dispatch 2026-03-09T20:31:17.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:16 vm05 ceph-mon[51870]: osdmap e507: 8 total, 8 up, 8 in 2026-03-09T20:31:17.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:16 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-100"}]: dispatch 2026-03-09T20:31:17.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:16 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-100"}]': finished 2026-03-09T20:31:17.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:16 vm05 ceph-mon[51870]: osdmap e508: 8 total, 8 up, 8 in 2026-03-09T20:31:18.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:17 vm09 ceph-mon[54524]: pgmap v767: 292 pgs: 292 active+clean; 8.3 MiB data, 997 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T20:31:18.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:17 vm05 ceph-mon[61345]: pgmap v767: 292 pgs: 292 active+clean; 8.3 MiB data, 997 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T20:31:18.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:17 vm05 ceph-mon[51870]: pgmap v767: 292 pgs: 292 active+clean; 8.3 MiB data, 997 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T20:31:18.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:31:18 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:31:18] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:31:19.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:18 vm09 ceph-mon[54524]: osdmap e509: 8 total, 8 up, 8 in 2026-03-09T20:31:19.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:18 vm09 ceph-mon[54524]: pgmap v770: 260 pgs: 260 active+clean; 8.3 MiB data, 997 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T20:31:19.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:18 vm05 ceph-mon[61345]: osdmap e509: 8 total, 8 up, 8 in 2026-03-09T20:31:19.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:18 vm05 ceph-mon[61345]: pgmap v770: 260 pgs: 260 active+clean; 8.3 MiB data, 997 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T20:31:19.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:18 vm05 ceph-mon[51870]: osdmap e509: 8 total, 8 up, 8 in 2026-03-09T20:31:19.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:18 vm05 ceph-mon[51870]: pgmap v770: 260 pgs: 260 active+clean; 8.3 MiB data, 997 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T20:31:20.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:19 vm09 ceph-mon[54524]: osdmap e510: 8 total, 8 up, 8 in 2026-03-09T20:31:20.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:19 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-102","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:31:20.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:19 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-102","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:31:20.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:19 vm05 ceph-mon[61345]: osdmap e510: 8 total, 8 up, 8 in 2026-03-09T20:31:20.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:19 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-102","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:31:20.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:19 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-102","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:31:20.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:19 vm05 ceph-mon[51870]: osdmap e510: 8 total, 8 up, 8 in 2026-03-09T20:31:20.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:19 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-102","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:31:20.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:19 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-102","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:31:21.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:20 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-102","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:31:21.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:20 vm09 ceph-mon[54524]: osdmap e511: 8 total, 8 up, 8 in 2026-03-09T20:31:21.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:20 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:31:21.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:20 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-102","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:31:21.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:20 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-102","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:31:21.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:20 vm09 ceph-mon[54524]: pgmap v773: 292 pgs: 20 unknown, 272 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:21.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:20 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-102","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:31:21.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:20 vm05 ceph-mon[61345]: osdmap e511: 8 total, 8 up, 8 in 2026-03-09T20:31:21.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:20 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:31:21.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:20 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-102","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:31:21.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:20 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-102","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:31:21.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:20 vm05 ceph-mon[61345]: pgmap v773: 292 pgs: 20 unknown, 272 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:21.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:20 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-102","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:31:21.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:20 vm05 ceph-mon[51870]: osdmap e511: 8 total, 8 up, 8 in 2026-03-09T20:31:21.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:20 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-09T20:31:21.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:20 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-102","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:31:21.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:20 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-102","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-09T20:31:21.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:20 vm05 ceph-mon[51870]: pgmap v773: 292 pgs: 20 unknown, 272 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:22.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:22 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-102","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T20:31:22.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:22 vm09 ceph-mon[54524]: osdmap e512: 8 total, 8 up, 8 in 2026-03-09T20:31:22.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:22 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T20:31:22.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:22 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T20:31:22.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:22 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-102","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T20:31:22.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:22 vm05 ceph-mon[61345]: osdmap e512: 8 total, 8 up, 8 in 2026-03-09T20:31:22.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:22 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T20:31:22.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:22 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T20:31:22.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:22 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-102","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-09T20:31:22.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:22 vm05 ceph-mon[51870]: osdmap e512: 8 total, 8 up, 8 in 2026-03-09T20:31:22.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:22 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T20:31:22.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:22 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-09T20:31:23.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:23 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T20:31:23.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:23 vm09 ceph-mon[54524]: osdmap e513: 8 total, 8 up, 8 in 2026-03-09T20:31:23.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:23 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T20:31:23.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:23 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T20:31:23.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:23 vm09 ceph-mon[54524]: pgmap v776: 292 pgs: 292 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:23.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:23 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-102","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T20:31:23.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:23 vm09 ceph-mon[54524]: osdmap e514: 8 total, 8 up, 8 in 2026-03-09T20:31:23.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:23 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T20:31:23.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:23 vm05 ceph-mon[61345]: osdmap e513: 8 total, 8 up, 8 in 2026-03-09T20:31:23.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:23 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T20:31:23.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:23 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T20:31:23.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:23 vm05 ceph-mon[61345]: pgmap v776: 292 pgs: 292 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:23.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:23 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-102","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T20:31:23.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:23 vm05 ceph-mon[61345]: osdmap e514: 8 total, 8 up, 8 in 2026-03-09T20:31:23.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:23 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-09T20:31:23.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:23 vm05 ceph-mon[51870]: osdmap e513: 8 total, 8 up, 8 in 2026-03-09T20:31:23.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:23 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T20:31:23.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:23 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-09T20:31:23.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:23 vm05 ceph-mon[51870]: pgmap v776: 292 pgs: 292 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:23.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:23 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-102","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-09T20:31:23.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:23 vm05 ceph-mon[51870]: osdmap e514: 8 total, 8 up, 8 in 2026-03-09T20:31:24.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:24 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:31:24.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:24 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:31:24.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:24 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-102"}]: dispatch 2026-03-09T20:31:24.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:24 vm05 ceph-mon[61345]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-102"}]: dispatch 2026-03-09T20:31:24.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:24 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:31:24.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:24 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:31:24.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:24 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-102"}]: dispatch 2026-03-09T20:31:24.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:24 vm05 ceph-mon[51870]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-102"}]: dispatch 2026-03-09T20:31:24.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:24 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:31:24.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:24 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-6"}]: dispatch 2026-03-09T20:31:24.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:24 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3797742204' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-102"}]: dispatch 2026-03-09T20:31:24.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:24 vm09 ceph-mon[54524]: from='client.25519 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-6", "tierpool": "test-rados-api-vm05-94573-102"}]: dispatch 2026-03-09T20:31:25.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:25 vm05 ceph-mon[61345]: osdmap e515: 8 total, 8 up, 8 in 2026-03-09T20:31:25.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:25 vm05 ceph-mon[61345]: pgmap v779: 260 pgs: 260 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:31:25.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:25 vm05 ceph-mon[51870]: osdmap e515: 8 total, 8 up, 8 in 2026-03-09T20:31:25.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:25 vm05 ceph-mon[51870]: pgmap v779: 260 pgs: 260 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:31:25.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:25 vm09 ceph-mon[54524]: osdmap e515: 8 total, 8 up, 8 in 2026-03-09T20:31:25.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:25 vm09 ceph-mon[54524]: pgmap v779: 260 pgs: 260 active+clean; 8.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:31:26.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:31:25 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:31:26.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:26 vm05 ceph-mon[61345]: osdmap e516: 8 total, 8 up, 8 in 2026-03-09T20:31:26.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:26 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1239324894' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm05-94573-104"}]: dispatch 2026-03-09T20:31:26.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:26 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1239324894' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm05-94573-104"}]: dispatch 2026-03-09T20:31:26.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:26 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1239324894' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm05-94573-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:31:26.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:26 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:31:26.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:26 vm05 ceph-mon[51870]: osdmap e516: 8 total, 8 up, 8 in 2026-03-09T20:31:26.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:26 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1239324894' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm05-94573-104"}]: dispatch 2026-03-09T20:31:26.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:26 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/1239324894' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm05-94573-104"}]: dispatch 2026-03-09T20:31:26.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:26 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1239324894' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm05-94573-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:31:26.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:26 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:31:26.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:26 vm09 ceph-mon[54524]: osdmap e516: 8 total, 8 up, 8 in 2026-03-09T20:31:26.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:26 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1239324894' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm05-94573-104"}]: dispatch 2026-03-09T20:31:26.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:26 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1239324894' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm05-94573-104"}]: dispatch 2026-03-09T20:31:26.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:26 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1239324894' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm05-94573-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:31:26.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:26 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:31:27.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:27 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1239324894' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm05-94573-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:31:27.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:27 vm05 ceph-mon[61345]: osdmap e517: 8 total, 8 up, 8 in 2026-03-09T20:31:27.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:27 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1239324894' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm05-94573-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm05-94573-104"}]: dispatch 2026-03-09T20:31:27.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:27 vm05 ceph-mon[61345]: pgmap v782: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:31:27.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:27 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/1239324894' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm05-94573-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:31:27.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:27 vm05 ceph-mon[51870]: osdmap e517: 8 total, 8 up, 8 in 2026-03-09T20:31:27.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:27 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1239324894' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm05-94573-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm05-94573-104"}]: dispatch 2026-03-09T20:31:27.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:27 vm05 ceph-mon[51870]: pgmap v782: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:31:27.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:27 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1239324894' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm05-94573-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:31:27.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:27 vm09 ceph-mon[54524]: osdmap e517: 8 total, 8 up, 8 in 2026-03-09T20:31:27.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:27 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1239324894' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm05-94573-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm05-94573-104"}]: dispatch 2026-03-09T20:31:27.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:27 vm09 ceph-mon[54524]: pgmap v782: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:31:28.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:28 vm05 ceph-mon[61345]: osdmap e518: 8 total, 8 up, 8 in 2026-03-09T20:31:28.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:28 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1239324894' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm05-94573-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm05-94573-104"}]': finished 2026-03-09T20:31:28.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:28 vm05 ceph-mon[61345]: osdmap e519: 8 total, 8 up, 8 in 2026-03-09T20:31:28.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:28 vm05 ceph-mon[51870]: osdmap e518: 8 total, 8 up, 8 in 2026-03-09T20:31:28.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:28 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1239324894' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm05-94573-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm05-94573-104"}]': finished 2026-03-09T20:31:28.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:28 vm05 ceph-mon[51870]: osdmap e519: 8 total, 8 up, 8 in 2026-03-09T20:31:28.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:28 vm09 ceph-mon[54524]: osdmap e518: 8 total, 8 up, 8 in 2026-03-09T20:31:28.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:28 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/1239324894' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm05-94573-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm05-94573-104"}]': finished 2026-03-09T20:31:28.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:28 vm09 ceph-mon[54524]: osdmap e519: 8 total, 8 up, 8 in 2026-03-09T20:31:28.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:31:28 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:31:28] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:31:29.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:29 vm05 ceph-mon[61345]: pgmap v785: 236 pgs: 8 unknown, 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:31:29.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:29 vm05 ceph-mon[61345]: osdmap e520: 8 total, 8 up, 8 in 2026-03-09T20:31:29.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:29 vm05 ceph-mon[51870]: pgmap v785: 236 pgs: 8 unknown, 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:31:29.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:29 vm05 ceph-mon[51870]: osdmap e520: 8 total, 8 up, 8 in 2026-03-09T20:31:29.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:29 vm09 ceph-mon[54524]: pgmap v785: 236 pgs: 8 unknown, 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:31:29.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:29 vm09 ceph-mon[54524]: osdmap e520: 8 total, 8 up, 8 in 2026-03-09T20:31:30.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:30 vm05 ceph-mon[61345]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:31:30.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:30 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:31:30.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:30 vm05 ceph-mon[61345]: osdmap e521: 8 total, 8 up, 8 in 2026-03-09T20:31:30.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:30 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1079259819' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-107","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:31:30.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:30 vm05 ceph-mon[51870]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:31:30.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:30 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:31:30.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:30 vm05 ceph-mon[51870]: osdmap e521: 8 total, 8 up, 8 in 2026-03-09T20:31:30.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:30 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/1079259819' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-107","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:31:30.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:30 vm09 ceph-mon[54524]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:31:30.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:30 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:31:30.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:30 vm09 ceph-mon[54524]: osdmap e521: 8 total, 8 up, 8 in 2026-03-09T20:31:30.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:30 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1079259819' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-107","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:31:31.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:31 vm05 ceph-mon[61345]: from='client.50335 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-107","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:31:31.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:31 vm05 ceph-mon[61345]: pgmap v788: 268 pgs: 8 creating+peering, 29 unknown, 231 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:31.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:31 vm05 ceph-mon[61345]: from='client.50335 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-107","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:31:31.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:31 vm05 ceph-mon[61345]: osdmap e522: 8 total, 8 up, 8 in 2026-03-09T20:31:31.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:31 vm05 ceph-mon[51870]: from='client.50335 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-107","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:31:31.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:31 vm05 ceph-mon[51870]: pgmap v788: 268 pgs: 8 creating+peering, 29 unknown, 231 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:31.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:31 vm05 ceph-mon[51870]: from='client.50335 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-107","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:31:31.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:31 vm05 ceph-mon[51870]: osdmap e522: 8 total, 8 up, 8 in 2026-03-09T20:31:31.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:31 vm09 ceph-mon[54524]: from='client.50335 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-107","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:31:31.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:31 vm09 ceph-mon[54524]: pgmap v788: 268 pgs: 8 creating+peering, 29 unknown, 231 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:31.523 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:31 vm09 ceph-mon[54524]: from='client.50335 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-107","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:31:31.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:31 vm09 ceph-mon[54524]: osdmap e522: 8 total, 8 up, 8 in 2026-03-09T20:31:33.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:33 vm05 ceph-mon[61345]: osdmap e523: 8 total, 8 up, 8 in 2026-03-09T20:31:33.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:33 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1079259819' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:31:33.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:33 vm05 ceph-mon[61345]: from='client.50335 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:31:33.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:33 vm05 ceph-mon[61345]: pgmap v791: 300 pgs: 32 unknown, 32 creating+peering, 236 active+clean; 455 KiB data, 1020 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T20:31:33.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:33 vm05 ceph-mon[51870]: osdmap e523: 8 total, 8 up, 8 in 2026-03-09T20:31:33.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:33 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1079259819' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:31:33.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:33 vm05 ceph-mon[51870]: from='client.50335 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:31:33.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:33 vm05 ceph-mon[51870]: pgmap v791: 300 pgs: 32 unknown, 32 creating+peering, 236 active+clean; 455 KiB data, 1020 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T20:31:33.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:33 vm09 ceph-mon[54524]: osdmap e523: 8 total, 8 up, 8 in 2026-03-09T20:31:33.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:33 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/1079259819' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:31:33.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:33 vm09 ceph-mon[54524]: from='client.50335 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:31:33.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:33 vm09 ceph-mon[54524]: pgmap v791: 300 pgs: 32 unknown, 32 creating+peering, 236 active+clean; 455 KiB data, 1020 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T20:31:34.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:34 vm05 ceph-mon[61345]: from='client.50335 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-107-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:31:34.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:34 vm05 ceph-mon[61345]: osdmap e524: 8 total, 8 up, 8 in 2026-03-09T20:31:34.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:34 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1079259819' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-107", "tierpool": "test-rados-api-vm05-94573-107-cache"}]: dispatch 2026-03-09T20:31:34.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:34 vm05 ceph-mon[61345]: from='client.50335 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-107", "tierpool": "test-rados-api-vm05-94573-107-cache"}]: dispatch 2026-03-09T20:31:34.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:34 vm05 ceph-mon[51870]: from='client.50335 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-107-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:31:34.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:34 vm05 ceph-mon[51870]: osdmap e524: 8 total, 8 up, 8 in 2026-03-09T20:31:34.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:34 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1079259819' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-107", "tierpool": "test-rados-api-vm05-94573-107-cache"}]: dispatch 2026-03-09T20:31:34.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:34 vm05 ceph-mon[51870]: from='client.50335 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-107", "tierpool": "test-rados-api-vm05-94573-107-cache"}]: dispatch 2026-03-09T20:31:34.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:34 vm09 ceph-mon[54524]: from='client.50335 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-107-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:31:34.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:34 vm09 ceph-mon[54524]: osdmap e524: 8 total, 8 up, 8 in 2026-03-09T20:31:34.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:34 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/1079259819' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-107", "tierpool": "test-rados-api-vm05-94573-107-cache"}]: dispatch 2026-03-09T20:31:34.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:34 vm09 ceph-mon[54524]: from='client.50335 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-107", "tierpool": "test-rados-api-vm05-94573-107-cache"}]: dispatch 2026-03-09T20:31:35.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:35 vm05 ceph-mon[61345]: from='client.50335 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-107", "tierpool": "test-rados-api-vm05-94573-107-cache"}]': finished 2026-03-09T20:31:35.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:35 vm05 ceph-mon[61345]: osdmap e525: 8 total, 8 up, 8 in 2026-03-09T20:31:35.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:35 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1079259819' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-107", "overlaypool": "test-rados-api-vm05-94573-107-cache"}]: dispatch 2026-03-09T20:31:35.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:35 vm05 ceph-mon[61345]: from='client.50335 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-107", "overlaypool": "test-rados-api-vm05-94573-107-cache"}]: dispatch 2026-03-09T20:31:35.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:35 vm05 ceph-mon[61345]: pgmap v794: 300 pgs: 32 unknown, 32 creating+peering, 236 active+clean; 455 KiB data, 1020 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T20:31:35.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:35 vm05 ceph-mon[51870]: from='client.50335 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-107", "tierpool": "test-rados-api-vm05-94573-107-cache"}]': finished 2026-03-09T20:31:35.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:35 vm05 ceph-mon[51870]: osdmap e525: 8 total, 8 up, 8 in 2026-03-09T20:31:35.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:35 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1079259819' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-107", "overlaypool": "test-rados-api-vm05-94573-107-cache"}]: dispatch 2026-03-09T20:31:35.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:35 vm05 ceph-mon[51870]: from='client.50335 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-107", "overlaypool": "test-rados-api-vm05-94573-107-cache"}]: dispatch 2026-03-09T20:31:35.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:35 vm05 ceph-mon[51870]: pgmap v794: 300 pgs: 32 unknown, 32 creating+peering, 236 active+clean; 455 KiB data, 1020 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T20:31:35.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:35 vm09 ceph-mon[54524]: from='client.50335 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-107", "tierpool": "test-rados-api-vm05-94573-107-cache"}]': finished 2026-03-09T20:31:35.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:35 vm09 ceph-mon[54524]: osdmap e525: 8 total, 8 up, 8 in 2026-03-09T20:31:35.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:35 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/1079259819' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-107", "overlaypool": "test-rados-api-vm05-94573-107-cache"}]: dispatch 2026-03-09T20:31:35.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:35 vm09 ceph-mon[54524]: from='client.50335 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-107", "overlaypool": "test-rados-api-vm05-94573-107-cache"}]: dispatch 2026-03-09T20:31:35.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:35 vm09 ceph-mon[54524]: pgmap v794: 300 pgs: 32 unknown, 32 creating+peering, 236 active+clean; 455 KiB data, 1020 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T20:31:36.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:31:35 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:31:36.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:36 vm05 ceph-mon[61345]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:31:36.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:36 vm05 ceph-mon[61345]: from='client.50335 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-107", "overlaypool": "test-rados-api-vm05-94573-107-cache"}]': finished 2026-03-09T20:31:36.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:36 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1079259819' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-107-cache", "mode": "writeback"}]: dispatch 2026-03-09T20:31:36.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:36 vm05 ceph-mon[61345]: osdmap e526: 8 total, 8 up, 8 in 2026-03-09T20:31:36.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:36 vm05 ceph-mon[61345]: from='client.50335 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-107-cache", "mode": "writeback"}]: dispatch 2026-03-09T20:31:36.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:36 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:31:36.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:36 vm05 ceph-mon[51870]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:31:36.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:36 vm05 ceph-mon[51870]: from='client.50335 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-107", "overlaypool": "test-rados-api-vm05-94573-107-cache"}]': finished 2026-03-09T20:31:36.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:36 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/1079259819' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-107-cache", "mode": "writeback"}]: dispatch 2026-03-09T20:31:36.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:36 vm05 ceph-mon[51870]: osdmap e526: 8 total, 8 up, 8 in 2026-03-09T20:31:36.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:36 vm05 ceph-mon[51870]: from='client.50335 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-107-cache", "mode": "writeback"}]: dispatch 2026-03-09T20:31:36.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:36 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:31:36.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:36 vm09 ceph-mon[54524]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:31:36.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:36 vm09 ceph-mon[54524]: from='client.50335 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-107", "overlaypool": "test-rados-api-vm05-94573-107-cache"}]': finished 2026-03-09T20:31:36.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:36 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1079259819' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-107-cache", "mode": "writeback"}]: dispatch 2026-03-09T20:31:36.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:36 vm09 ceph-mon[54524]: osdmap e526: 8 total, 8 up, 8 in 2026-03-09T20:31:36.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:36 vm09 ceph-mon[54524]: from='client.50335 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-107-cache", "mode": "writeback"}]: dispatch 2026-03-09T20:31:36.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:36 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:31:37.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:37 vm05 ceph-mon[61345]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:31:37.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:37 vm05 ceph-mon[61345]: from='client.50335 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-107-cache", "mode": "writeback"}]': finished 2026-03-09T20:31:37.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:37 vm05 ceph-mon[61345]: osdmap e527: 8 total, 8 up, 8 in 2026-03-09T20:31:37.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:37 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1079259819' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-107"}]: dispatch 2026-03-09T20:31:37.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:37 vm05 ceph-mon[61345]: from='client.50335 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-107"}]: dispatch 2026-03-09T20:31:37.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:37 vm05 ceph-mon[61345]: pgmap v797: 300 pgs: 300 active+clean; 455 KiB data, 1021 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:37.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:37 vm05 ceph-mon[51870]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:31:37.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:37 vm05 ceph-mon[51870]: from='client.50335 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-107-cache", "mode": "writeback"}]': finished 2026-03-09T20:31:37.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:37 vm05 ceph-mon[51870]: osdmap e527: 8 total, 8 up, 8 in 2026-03-09T20:31:37.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:37 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1079259819' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-107"}]: dispatch 2026-03-09T20:31:37.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:37 vm05 ceph-mon[51870]: from='client.50335 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-107"}]: dispatch 2026-03-09T20:31:37.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:37 vm05 ceph-mon[51870]: pgmap v797: 300 pgs: 300 active+clean; 455 KiB data, 1021 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:37.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:37 vm09 ceph-mon[54524]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:31:37.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:37 vm09 ceph-mon[54524]: from='client.50335 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-107-cache", "mode": "writeback"}]': finished 2026-03-09T20:31:37.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:37 vm09 ceph-mon[54524]: osdmap e527: 8 total, 8 up, 8 in 2026-03-09T20:31:37.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:37 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/1079259819' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-107"}]: dispatch 2026-03-09T20:31:37.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:37 vm09 ceph-mon[54524]: from='client.50335 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-107"}]: dispatch 2026-03-09T20:31:37.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:37 vm09 ceph-mon[54524]: pgmap v797: 300 pgs: 300 active+clean; 455 KiB data, 1021 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:38.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:38 vm09 ceph-mon[54524]: from='client.50335 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-107"}]': finished 2026-03-09T20:31:38.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:38 vm09 ceph-mon[54524]: osdmap e528: 8 total, 8 up, 8 in 2026-03-09T20:31:38.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:38 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1079259819' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-107", "tierpool": "test-rados-api-vm05-94573-107-cache"}]: dispatch 2026-03-09T20:31:38.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:38 vm09 ceph-mon[54524]: from='client.50335 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-107", "tierpool": "test-rados-api-vm05-94573-107-cache"}]: dispatch 2026-03-09T20:31:38.633 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:38 vm05 ceph-mon[61345]: from='client.50335 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-107"}]': finished 2026-03-09T20:31:38.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:38 vm05 ceph-mon[61345]: osdmap e528: 8 total, 8 up, 8 in 2026-03-09T20:31:38.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:38 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1079259819' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-107", "tierpool": "test-rados-api-vm05-94573-107-cache"}]: dispatch 2026-03-09T20:31:38.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:38 vm05 ceph-mon[61345]: from='client.50335 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-107", "tierpool": "test-rados-api-vm05-94573-107-cache"}]: dispatch 2026-03-09T20:31:38.634 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:38 vm05 ceph-mon[51870]: from='client.50335 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-107"}]': finished 2026-03-09T20:31:38.634 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:38 vm05 ceph-mon[51870]: osdmap e528: 8 total, 8 up, 8 in 2026-03-09T20:31:38.634 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:38 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/1079259819' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-107", "tierpool": "test-rados-api-vm05-94573-107-cache"}]: dispatch 2026-03-09T20:31:38.634 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:38 vm05 ceph-mon[51870]: from='client.50335 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-107", "tierpool": "test-rados-api-vm05-94573-107-cache"}]: dispatch 2026-03-09T20:31:38.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:31:38 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:31:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:31:39.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:39 vm09 ceph-mon[54524]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:31:39.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:39 vm09 ceph-mon[54524]: from='client.50335 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-107", "tierpool": "test-rados-api-vm05-94573-107-cache"}]': finished 2026-03-09T20:31:39.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:39 vm09 ceph-mon[54524]: osdmap e529: 8 total, 8 up, 8 in 2026-03-09T20:31:39.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:39 vm09 ceph-mon[54524]: pgmap v800: 300 pgs: 300 active+clean; 455 KiB data, 1021 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:39.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:39 vm05 ceph-mon[61345]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:31:39.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:39 vm05 ceph-mon[61345]: from='client.50335 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-107", "tierpool": "test-rados-api-vm05-94573-107-cache"}]': finished 2026-03-09T20:31:39.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:39 vm05 ceph-mon[61345]: osdmap e529: 8 total, 8 up, 8 in 2026-03-09T20:31:39.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:39 vm05 ceph-mon[61345]: pgmap v800: 300 pgs: 300 active+clean; 455 KiB data, 1021 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:39.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:39 vm05 ceph-mon[51870]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:31:39.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:39 vm05 ceph-mon[51870]: from='client.50335 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-107", "tierpool": "test-rados-api-vm05-94573-107-cache"}]': finished 2026-03-09T20:31:39.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:39 vm05 ceph-mon[51870]: osdmap e529: 8 total, 8 up, 8 in 2026-03-09T20:31:39.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:39 vm05 ceph-mon[51870]: pgmap v800: 300 pgs: 300 active+clean; 455 KiB data, 1021 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:40.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:40 vm09 ceph-mon[54524]: osdmap e530: 8 total, 8 up, 8 in 2026-03-09T20:31:40.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:40 vm05 ceph-mon[51870]: osdmap e530: 8 total, 8 up, 8 in 2026-03-09T20:31:40.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:40 vm05 ceph-mon[61345]: 
osdmap e530: 8 total, 8 up, 8 in 2026-03-09T20:31:41.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:41 vm09 ceph-mon[54524]: osdmap e531: 8 total, 8 up, 8 in 2026-03-09T20:31:41.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:41 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-94573-109"}]: dispatch 2026-03-09T20:31:41.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:41 vm09 ceph-mon[54524]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-94573-109"}]: dispatch 2026-03-09T20:31:41.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:41 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-94573-109"}]: dispatch 2026-03-09T20:31:41.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:41 vm09 ceph-mon[54524]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-94573-109"}]: dispatch 2026-03-09T20:31:41.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:41 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm05-94573-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:31:41.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:41 vm09 ceph-mon[54524]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm05-94573-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:31:41.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:41 vm09 ceph-mon[54524]: pgmap v803: 236 pgs: 236 active+clean; 455 KiB data, 1021 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:41.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:41 vm05 ceph-mon[51870]: osdmap e531: 8 total, 8 up, 8 in 2026-03-09T20:31:41.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:41 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-94573-109"}]: dispatch 2026-03-09T20:31:41.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:41 vm05 ceph-mon[51870]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-94573-109"}]: dispatch 2026-03-09T20:31:41.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:41 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-94573-109"}]: dispatch 2026-03-09T20:31:41.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:41 vm05 ceph-mon[51870]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-94573-109"}]: dispatch 2026-03-09T20:31:41.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:41 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm05-94573-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:31:41.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:41 vm05 ceph-mon[51870]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm05-94573-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:31:41.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:41 vm05 ceph-mon[51870]: pgmap v803: 236 pgs: 236 active+clean; 455 KiB data, 1021 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:41.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:41 vm05 ceph-mon[61345]: osdmap e531: 8 total, 8 up, 8 in 2026-03-09T20:31:41.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:41 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-94573-109"}]: dispatch 2026-03-09T20:31:41.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:41 vm05 ceph-mon[61345]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-94573-109"}]: dispatch 2026-03-09T20:31:41.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:41 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-94573-109"}]: dispatch 2026-03-09T20:31:41.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:41 vm05 ceph-mon[61345]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-94573-109"}]: dispatch 2026-03-09T20:31:41.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:41 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm05-94573-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:31:41.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:41 vm05 ceph-mon[61345]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm05-94573-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:31:41.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:41 vm05 ceph-mon[61345]: pgmap v803: 236 pgs: 236 active+clean; 455 KiB data, 1021 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:42.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:42 vm09 ceph-mon[54524]: from='client.50341 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm05-94573-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:31:42.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:42 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-94573-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm05-94573-109"}]: dispatch 2026-03-09T20:31:42.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:42 vm09 ceph-mon[54524]: osdmap e532: 8 total, 8 up, 8 in 2026-03-09T20:31:42.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:42 vm09 ceph-mon[54524]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-94573-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm05-94573-109"}]: dispatch 2026-03-09T20:31:42.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:42 vm05 ceph-mon[51870]: from='client.50341 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm05-94573-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:31:42.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:42 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-94573-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm05-94573-109"}]: dispatch 2026-03-09T20:31:42.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:42 vm05 ceph-mon[51870]: osdmap e532: 8 total, 8 up, 8 in 2026-03-09T20:31:42.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:42 vm05 ceph-mon[51870]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-94573-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm05-94573-109"}]: dispatch 2026-03-09T20:31:42.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:42 vm05 ceph-mon[61345]: from='client.50341 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm05-94573-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:31:42.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:42 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-94573-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm05-94573-109"}]: dispatch 2026-03-09T20:31:42.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:42 vm05 ceph-mon[61345]: osdmap e532: 8 total, 8 up, 8 in 2026-03-09T20:31:42.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:42 vm05 ceph-mon[61345]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-94573-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm05-94573-109"}]: dispatch 2026-03-09T20:31:43.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:43 vm09 ceph-mon[54524]: osdmap e533: 8 total, 8 up, 8 in 2026-03-09T20:31:43.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:43 vm09 ceph-mon[54524]: pgmap v806: 236 pgs: 236 active+clean; 455 KiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:43.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:43 vm09 ceph-mon[54524]: from='client.50341 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-94573-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm05-94573-109"}]': finished 2026-03-09T20:31:43.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:43 vm09 ceph-mon[54524]: osdmap e534: 8 total, 8 up, 8 in 2026-03-09T20:31:43.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:43 vm05 ceph-mon[61345]: osdmap e533: 8 total, 8 up, 8 in 2026-03-09T20:31:43.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:43 vm05 ceph-mon[61345]: pgmap v806: 236 pgs: 236 active+clean; 455 KiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:43.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:43 vm05 ceph-mon[61345]: from='client.50341 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-94573-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm05-94573-109"}]': finished 2026-03-09T20:31:43.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:43 vm05 ceph-mon[61345]: osdmap e534: 8 total, 8 up, 8 in 2026-03-09T20:31:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:43 vm05 ceph-mon[51870]: osdmap e533: 8 total, 8 up, 8 in 2026-03-09T20:31:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:43 vm05 ceph-mon[51870]: pgmap v806: 236 pgs: 236 active+clean; 455 KiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:43 vm05 ceph-mon[51870]: from='client.50341 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-94573-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm05-94573-109"}]': finished 2026-03-09T20:31:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:43 vm05 ceph-mon[51870]: osdmap e534: 8 total, 8 up, 8 in 2026-03-09T20:31:45.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:45 vm05 ceph-mon[61345]: osdmap e535: 8 total, 8 up, 8 in 2026-03-09T20:31:45.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:45 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-109-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:31:45.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:45 vm05 ceph-mon[61345]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-109-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:31:45.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:45 vm05 ceph-mon[61345]: pgmap v809: 276 pgs: 40 unknown, 236 active+clean; 455 KiB data, 1022 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:31:45.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:45 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:31:45.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:45 vm05 ceph-mon[51870]: osdmap e535: 8 total, 8 up, 8 in 2026-03-09T20:31:45.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:45 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-109-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:31:45.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:45 vm05 ceph-mon[51870]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-109-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:31:45.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:45 vm05 ceph-mon[51870]: pgmap v809: 276 pgs: 40 unknown, 236 active+clean; 455 KiB data, 1022 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:31:45.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:45 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:31:45.753 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:45 vm09 ceph-mon[54524]: osdmap e535: 8 total, 8 up, 8 in 2026-03-09T20:31:45.753 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:45 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-109-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:31:45.753 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:45 vm09 ceph-mon[54524]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-109-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:31:45.753 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:45 vm09 ceph-mon[54524]: pgmap v809: 276 pgs: 40 unknown, 236 active+clean; 455 KiB data, 1022 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:31:45.754 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:45 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:31:46.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:31:45 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:31:46.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:46 vm05 ceph-mon[61345]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:31:46.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:46 vm05 ceph-mon[61345]: from='client.50341 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-109-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:31:46.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:46 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-109", "tierpool": "test-rados-api-vm05-94573-109-cache"}]: dispatch 2026-03-09T20:31:46.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:46 vm05 ceph-mon[61345]: osdmap e536: 8 total, 8 up, 8 in 2026-03-09T20:31:46.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:46 vm05 ceph-mon[61345]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-109", "tierpool": "test-rados-api-vm05-94573-109-cache"}]: dispatch 2026-03-09T20:31:46.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:46 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:31:46.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:46 vm05 ceph-mon[51870]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:31:46.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:46 vm05 ceph-mon[51870]: from='client.50341 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-109-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:31:46.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:46 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-109", "tierpool": "test-rados-api-vm05-94573-109-cache"}]: dispatch 2026-03-09T20:31:46.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:46 vm05 ceph-mon[51870]: osdmap e536: 8 total, 8 up, 8 in 2026-03-09T20:31:46.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:46 vm05 ceph-mon[51870]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-109", "tierpool": "test-rados-api-vm05-94573-109-cache"}]: dispatch 2026-03-09T20:31:46.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:46 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:31:46.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:46 vm09 ceph-mon[54524]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:31:46.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:46 vm09 ceph-mon[54524]: from='client.50341 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-109-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:31:46.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:46 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-109", "tierpool": "test-rados-api-vm05-94573-109-cache"}]: dispatch 2026-03-09T20:31:46.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:46 vm09 ceph-mon[54524]: osdmap e536: 8 total, 8 up, 8 in 2026-03-09T20:31:46.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:46 vm09 ceph-mon[54524]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-109", "tierpool": "test-rados-api-vm05-94573-109-cache"}]: dispatch 2026-03-09T20:31:46.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:46 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:31:47.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:47 vm05 ceph-mon[61345]: from='client.50341 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-109", "tierpool": "test-rados-api-vm05-94573-109-cache"}]': finished 2026-03-09T20:31:47.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:47 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-109", "overlaypool": "test-rados-api-vm05-94573-109-cache"}]: dispatch 2026-03-09T20:31:47.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:47 vm05 ceph-mon[61345]: osdmap e537: 8 total, 8 up, 8 in 2026-03-09T20:31:47.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:47 vm05 ceph-mon[61345]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-109", "overlaypool": "test-rados-api-vm05-94573-109-cache"}]: dispatch 2026-03-09T20:31:47.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:47 vm05 ceph-mon[61345]: pgmap v812: 276 pgs: 3 creating+activating, 6 creating+peering, 267 active+clean; 455 KiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:47.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:47 vm05 ceph-mon[61345]: from='client.50341 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-109", "overlaypool": "test-rados-api-vm05-94573-109-cache"}]': finished 2026-03-09T20:31:47.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:47 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-109-cache", "mode": "writeback"}]: dispatch 2026-03-09T20:31:47.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:47 vm05 ceph-mon[61345]: osdmap e538: 8 total, 8 up, 8 in 2026-03-09T20:31:47.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:47 vm05 ceph-mon[51870]: from='client.50341 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-109", "tierpool": "test-rados-api-vm05-94573-109-cache"}]': finished 2026-03-09T20:31:47.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:47 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-109", "overlaypool": "test-rados-api-vm05-94573-109-cache"}]: dispatch 2026-03-09T20:31:47.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:47 vm05 ceph-mon[51870]: osdmap e537: 8 total, 8 up, 8 in 2026-03-09T20:31:47.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:47 vm05 ceph-mon[51870]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-109", "overlaypool": "test-rados-api-vm05-94573-109-cache"}]: dispatch 2026-03-09T20:31:47.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:47 vm05 ceph-mon[51870]: pgmap v812: 276 pgs: 3 creating+activating, 6 creating+peering, 267 active+clean; 455 KiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:47.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:47 vm05 ceph-mon[51870]: from='client.50341 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-109", "overlaypool": "test-rados-api-vm05-94573-109-cache"}]': finished 2026-03-09T20:31:47.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:47 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-109-cache", "mode": "writeback"}]: dispatch 2026-03-09T20:31:47.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:47 vm05 ceph-mon[51870]: osdmap e538: 8 total, 8 up, 8 in 2026-03-09T20:31:47.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:47 vm09 ceph-mon[54524]: from='client.50341 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-109", "tierpool": "test-rados-api-vm05-94573-109-cache"}]': finished 2026-03-09T20:31:47.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:47 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-109", "overlaypool": "test-rados-api-vm05-94573-109-cache"}]: dispatch 2026-03-09T20:31:47.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:47 vm09 ceph-mon[54524]: osdmap e537: 8 total, 8 up, 8 in 2026-03-09T20:31:47.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:47 vm09 ceph-mon[54524]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-109", "overlaypool": "test-rados-api-vm05-94573-109-cache"}]: dispatch 2026-03-09T20:31:47.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:47 vm09 ceph-mon[54524]: pgmap v812: 276 pgs: 3 creating+activating, 6 creating+peering, 267 active+clean; 455 KiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:47.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:47 vm09 ceph-mon[54524]: from='client.50341 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-109", "overlaypool": "test-rados-api-vm05-94573-109-cache"}]': finished 2026-03-09T20:31:47.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:47 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-109-cache", "mode": "writeback"}]: dispatch 2026-03-09T20:31:47.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:47 vm09 ceph-mon[54524]: osdmap e538: 8 total, 8 up, 8 in 2026-03-09T20:31:48.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:48 vm05 ceph-mon[61345]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-109-cache", "mode": "writeback"}]: dispatch 2026-03-09T20:31:48.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:48 vm05 ceph-mon[61345]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:31:48.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:48 vm05 ceph-mon[61345]: from='client.50341 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-109-cache", "mode": "writeback"}]': finished 2026-03-09T20:31:48.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:48 vm05 ceph-mon[61345]: osdmap e539: 8 total, 8 up, 8 in 2026-03-09T20:31:48.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:48 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-109-cache","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T20:31:48.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:48 vm05 ceph-mon[61345]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-109-cache","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T20:31:48.634 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:48 vm05 ceph-mon[51870]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-109-cache", "mode": "writeback"}]: dispatch 2026-03-09T20:31:48.634 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:48 vm05 ceph-mon[51870]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:31:48.634 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:48 vm05 ceph-mon[51870]: from='client.50341 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-109-cache", "mode": "writeback"}]': finished 2026-03-09T20:31:48.634 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:48 vm05 ceph-mon[51870]: osdmap e539: 8 total, 8 up, 8 in 2026-03-09T20:31:48.634 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:48 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-109-cache","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T20:31:48.634 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:48 vm05 ceph-mon[51870]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-109-cache","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T20:31:48.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:48 vm09 ceph-mon[54524]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-109-cache", "mode": "writeback"}]: dispatch 2026-03-09T20:31:48.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:48 vm09 ceph-mon[54524]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:31:48.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:48 vm09 ceph-mon[54524]: from='client.50341 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-109-cache", "mode": "writeback"}]': finished 2026-03-09T20:31:48.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:48 vm09 ceph-mon[54524]: osdmap e539: 8 total, 8 up, 8 in 2026-03-09T20:31:48.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:48 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-109-cache","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T20:31:48.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:48 vm09 ceph-mon[54524]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-109-cache","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T20:31:48.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:31:48 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:31:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:31:49.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:49 vm05 ceph-mon[61345]: pgmap v815: 276 pgs: 3 creating+activating, 6 creating+peering, 267 active+clean; 455 KiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:49.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:49 vm05 ceph-mon[51870]: pgmap v815: 276 pgs: 3 creating+activating, 6 creating+peering, 267 active+clean; 455 KiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:49.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:49 vm09 ceph-mon[54524]: pgmap v815: 276 pgs: 3 creating+activating, 6 creating+peering, 267 active+clean; 455 KiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:50.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:50 vm09 ceph-mon[54524]: from='client.50341 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-109-cache","var": "hit_set_count","val": "2"}]': finished 2026-03-09T20:31:50.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:50 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-109-cache","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:31:50.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:50 vm09 ceph-mon[54524]: osdmap e540: 8 total, 8 up, 8 in 2026-03-09T20:31:50.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:50 vm09 ceph-mon[54524]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-109-cache","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:31:50.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:50 vm05 ceph-mon[61345]: from='client.50341 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-109-cache","var": "hit_set_count","val": "2"}]': finished 2026-03-09T20:31:50.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:50 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-109-cache","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:31:50.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:50 vm05 ceph-mon[61345]: osdmap e540: 8 total, 8 up, 8 in 2026-03-09T20:31:50.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:50 vm05 ceph-mon[61345]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-109-cache","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:31:50.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:50 vm05 ceph-mon[51870]: from='client.50341 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-109-cache","var": "hit_set_count","val": "2"}]': finished 2026-03-09T20:31:50.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:50 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-109-cache","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:31:50.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:50 vm05 ceph-mon[51870]: osdmap e540: 8 total, 8 up, 8 in 2026-03-09T20:31:50.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:50 vm05 ceph-mon[51870]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-109-cache","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:31:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:51 vm09 ceph-mon[54524]: pgmap v817: 276 pgs: 276 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:51 vm09 ceph-mon[54524]: from='client.50341 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-109-cache","var": "hit_set_period","val": "600"}]': finished 2026-03-09T20:31:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:51 vm09 ceph-mon[54524]: osdmap e541: 8 total, 8 up, 8 in 2026-03-09T20:31:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:51 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T20:31:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:51 vm09 ceph-mon[54524]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T20:31:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:51 vm09 ceph-mon[54524]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:31:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:51 vm05 ceph-mon[61345]: pgmap v817: 276 pgs: 276 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:51 vm05 ceph-mon[61345]: from='client.50341 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-109-cache","var": "hit_set_period","val": "600"}]': finished 2026-03-09T20:31:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:51 vm05 ceph-mon[61345]: osdmap e541: 8 total, 8 up, 8 in 2026-03-09T20:31:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:51 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T20:31:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:51 vm05 ceph-mon[61345]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T20:31:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:51 vm05 ceph-mon[61345]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:31:51.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:51 vm05 ceph-mon[51870]: pgmap v817: 276 pgs: 276 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:51.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:51 vm05 ceph-mon[51870]: from='client.50341 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-109-cache","var": "hit_set_period","val": "600"}]': finished 2026-03-09T20:31:51.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:51 vm05 ceph-mon[51870]: osdmap e541: 8 total, 8 up, 8 in 2026-03-09T20:31:51.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:51 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T20:31:51.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:51 vm05 ceph-mon[51870]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T20:31:51.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:51 vm05 ceph-mon[51870]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:31:52.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:52 vm09 ceph-mon[54524]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:31:52.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:52 vm09 ceph-mon[54524]: from='client.50341 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-109-cache","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T20:31:52.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:52 vm09 ceph-mon[54524]: osdmap e542: 8 total, 8 up, 8 in 2026-03-09T20:31:52.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:52 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-09T20:31:52.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:52 vm09 ceph-mon[54524]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-09T20:31:52.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:52 vm09 ceph-mon[54524]: from='client.50341 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-109-cache","var": "min_read_recency_for_promote","val": "4"}]': finished 2026-03-09T20:31:52.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:52 vm09 ceph-mon[54524]: osdmap e543: 8 total, 8 up, 8 in 2026-03-09T20:31:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:52 vm05 ceph-mon[51870]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:31:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:52 vm05 ceph-mon[51870]: from='client.50341 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-109-cache","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T20:31:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:52 vm05 ceph-mon[51870]: osdmap e542: 8 total, 8 up, 8 in 2026-03-09T20:31:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:52 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-09T20:31:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:52 vm05 ceph-mon[51870]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-09T20:31:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:52 vm05 ceph-mon[51870]: from='client.50341 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-109-cache","var": "min_read_recency_for_promote","val": "4"}]': finished 2026-03-09T20:31:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:52 vm05 ceph-mon[51870]: osdmap e543: 8 total, 8 up, 8 in 2026-03-09T20:31:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:52 vm05 ceph-mon[61345]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:31:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:52 vm05 ceph-mon[61345]: from='client.50341 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-109-cache","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T20:31:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:52 vm05 ceph-mon[61345]: osdmap e542: 8 total, 8 up, 8 in 2026-03-09T20:31:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:52 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-09T20:31:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:52 vm05 ceph-mon[61345]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-09T20:31:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:52 vm05 ceph-mon[61345]: from='client.50341 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-109-cache","var": "min_read_recency_for_promote","val": "4"}]': finished 2026-03-09T20:31:52.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:52 vm05 ceph-mon[61345]: osdmap e543: 8 total, 8 up, 8 in 2026-03-09T20:31:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:53 vm09 ceph-mon[54524]: pgmap v820: 276 pgs: 276 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:53 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-109"}]: dispatch 2026-03-09T20:31:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:53 vm09 ceph-mon[54524]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-109"}]: dispatch 2026-03-09T20:31:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:53 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:31:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:53 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:31:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:53 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:31:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:53 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:31:53.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:53 vm05 ceph-mon[61345]: pgmap v820: 276 pgs: 276 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:53.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:53 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-109"}]: dispatch 2026-03-09T20:31:53.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:53 vm05 ceph-mon[61345]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-109"}]: dispatch 2026-03-09T20:31:53.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:53 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:31:53.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:53 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:31:53.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:53 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:31:53.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:53 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:31:53.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:53 vm05 ceph-mon[51870]: pgmap v820: 276 pgs: 276 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:53.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:53 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-109"}]: dispatch 2026-03-09T20:31:53.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:53 vm05 ceph-mon[51870]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-109"}]: dispatch 2026-03-09T20:31:53.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:53 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:31:53.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:53 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:31:53.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:53 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:31:53.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:53 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:31:54.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:54 vm05 ceph-mon[61345]: from='client.50341 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-109"}]': finished 2026-03-09T20:31:54.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:54 vm05 ceph-mon[61345]: osdmap e544: 8 total, 8 up, 8 in 2026-03-09T20:31:54.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:54 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-109", "tierpool": "test-rados-api-vm05-94573-109-cache"}]: dispatch 2026-03-09T20:31:54.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:54 vm05 ceph-mon[61345]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-109", "tierpool": "test-rados-api-vm05-94573-109-cache"}]: dispatch 2026-03-09T20:31:54.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:54 vm05 ceph-mon[51870]: from='client.50341 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-109"}]': finished 2026-03-09T20:31:54.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:54 vm05 ceph-mon[51870]: osdmap e544: 8 total, 8 up, 8 in 2026-03-09T20:31:54.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:54 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-109", "tierpool": "test-rados-api-vm05-94573-109-cache"}]: dispatch 2026-03-09T20:31:54.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:54 vm05 ceph-mon[51870]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-109", "tierpool": "test-rados-api-vm05-94573-109-cache"}]: dispatch 2026-03-09T20:31:54.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:54 vm09 ceph-mon[54524]: from='client.50341 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-109"}]': finished 2026-03-09T20:31:54.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:54 vm09 ceph-mon[54524]: osdmap e544: 8 total, 8 up, 8 in 2026-03-09T20:31:54.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:54 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-109", "tierpool": "test-rados-api-vm05-94573-109-cache"}]: dispatch 2026-03-09T20:31:54.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:54 vm09 ceph-mon[54524]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-109", "tierpool": "test-rados-api-vm05-94573-109-cache"}]: dispatch 2026-03-09T20:31:55.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:55 vm05 ceph-mon[61345]: pgmap v823: 276 pgs: 276 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:31:55.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:55 vm05 ceph-mon[61345]: from='client.50341 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-109", "tierpool": "test-rados-api-vm05-94573-109-cache"}]': finished 2026-03-09T20:31:55.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:55 vm05 ceph-mon[61345]: osdmap e545: 8 total, 8 up, 8 in 2026-03-09T20:31:55.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:55 vm05 ceph-mon[51870]: pgmap v823: 276 pgs: 276 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:31:55.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:55 vm05 ceph-mon[51870]: from='client.50341 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-109", "tierpool": "test-rados-api-vm05-94573-109-cache"}]': finished 2026-03-09T20:31:55.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:55 vm05 ceph-mon[51870]: osdmap e545: 8 total, 8 up, 8 in 2026-03-09T20:31:55.765 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:55 vm09 ceph-mon[54524]: pgmap v823: 276 pgs: 276 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:31:55.765 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:55 vm09 ceph-mon[54524]: from='client.50341 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-109", "tierpool": "test-rados-api-vm05-94573-109-cache"}]': finished 2026-03-09T20:31:55.765 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:55 vm09 ceph-mon[54524]: osdmap e545: 8 total, 8 up, 8 in 2026-03-09T20:31:56.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:31:55 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:31:56.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:56 
vm05 ceph-mon[61345]: osdmap e546: 8 total, 8 up, 8 in 2026-03-09T20:31:56.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:56 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:31:56.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:56 vm05 ceph-mon[51870]: osdmap e546: 8 total, 8 up, 8 in 2026-03-09T20:31:56.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:56 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:31:56.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:56 vm09 ceph-mon[54524]: osdmap e546: 8 total, 8 up, 8 in 2026-03-09T20:31:56.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:56 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:31:57.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:57 vm09 ceph-mon[54524]: pgmap v826: 244 pgs: 244 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:31:57.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:57 vm09 ceph-mon[54524]: osdmap e547: 8 total, 8 up, 8 in 2026-03-09T20:31:57.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:57 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-94573-109"}]: dispatch 2026-03-09T20:31:57.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:57 vm09 ceph-mon[54524]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-94573-109"}]: dispatch 2026-03-09T20:31:57.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:57 vm09 ceph-mon[54524]: from='client.50341 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-94573-109"}]': finished 2026-03-09T20:31:57.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:57 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-94573-109"}]: dispatch 2026-03-09T20:31:57.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:57 vm09 ceph-mon[54524]: osdmap e548: 8 total, 8 up, 8 in 2026-03-09T20:31:57.815 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:57 vm05 ceph-mon[61345]: pgmap v826: 244 pgs: 244 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:31:57.815 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:57 vm05 ceph-mon[61345]: osdmap e547: 8 total, 8 up, 8 in 2026-03-09T20:31:57.815 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:57 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-94573-109"}]: dispatch 2026-03-09T20:31:57.815 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:57 vm05 ceph-mon[61345]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-94573-109"}]: dispatch 2026-03-09T20:31:57.815 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:57 vm05 ceph-mon[61345]: from='client.50341 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-94573-109"}]': finished 2026-03-09T20:31:57.815 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:57 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-94573-109"}]: dispatch 2026-03-09T20:31:57.815 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:57 vm05 ceph-mon[61345]: osdmap e548: 8 total, 8 up, 8 in 2026-03-09T20:31:57.815 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:57 vm05 ceph-mon[51870]: pgmap v826: 244 pgs: 244 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:31:57.815 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:57 vm05 ceph-mon[51870]: osdmap e547: 8 total, 8 up, 8 in 2026-03-09T20:31:57.815 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:57 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-94573-109"}]: dispatch 2026-03-09T20:31:57.815 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:57 vm05 ceph-mon[51870]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-94573-109"}]: dispatch 2026-03-09T20:31:57.815 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:57 vm05 ceph-mon[51870]: from='client.50341 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-94573-109"}]': finished 2026-03-09T20:31:57.815 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:57 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/1524812461' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-94573-109"}]: dispatch 2026-03-09T20:31:57.815 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:57 vm05 ceph-mon[51870]: osdmap e548: 8 total, 8 up, 8 in 2026-03-09T20:31:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:58 vm09 ceph-mon[54524]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-94573-109"}]: dispatch 2026-03-09T20:31:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:58 vm09 ceph-mon[54524]: from='client.50341 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-94573-109"}]': finished 2026-03-09T20:31:58.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:58 vm09 ceph-mon[54524]: osdmap e549: 8 total, 8 up, 8 in 2026-03-09T20:31:58.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:58 vm05 ceph-mon[61345]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-94573-109"}]: dispatch 2026-03-09T20:31:58.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:58 vm05 ceph-mon[61345]: from='client.50341 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-94573-109"}]': finished 2026-03-09T20:31:58.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:58 vm05 ceph-mon[61345]: osdmap e549: 8 total, 8 up, 8 in 2026-03-09T20:31:58.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:58 vm05 ceph-mon[51870]: from='client.50341 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-94573-109"}]: dispatch 2026-03-09T20:31:58.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:58 vm05 ceph-mon[51870]: from='client.50341 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-94573-109"}]': finished 2026-03-09T20:31:58.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:58 vm05 ceph-mon[51870]: osdmap e549: 8 total, 8 up, 8 in 2026-03-09T20:31:58.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:31:58 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:31:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:31:59.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:59 vm09 ceph-mon[54524]: pgmap v829: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:59.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:59 vm09 ceph-mon[54524]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:31:59.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:59 vm09 ceph-mon[54524]: osdmap e550: 8 total, 8 up, 8 in 2026-03-09T20:31:59.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:31:59 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/1239324894' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm05-94573-104"}]: dispatch 2026-03-09T20:31:59.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:59 vm05 ceph-mon[61345]: pgmap v829: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:59.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:59 vm05 ceph-mon[61345]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:31:59.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:59 vm05 ceph-mon[61345]: osdmap e550: 8 total, 8 up, 8 in 2026-03-09T20:31:59.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:31:59 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1239324894' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm05-94573-104"}]: dispatch 2026-03-09T20:31:59.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:59 vm05 ceph-mon[51870]: pgmap v829: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:31:59.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:59 vm05 ceph-mon[51870]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:31:59.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:59 vm05 ceph-mon[51870]: osdmap e550: 8 total, 8 up, 8 in 2026-03-09T20:31:59.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:31:59 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1239324894' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm05-94573-104"}]: dispatch 2026-03-09T20:32:00.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:00 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:32:00.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:00 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1239324894' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm05-94573-104"}]': finished 2026-03-09T20:32:00.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:00 vm09 ceph-mon[54524]: osdmap e551: 8 total, 8 up, 8 in 2026-03-09T20:32:00.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:00 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1239324894' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm05-94573-104"}]: dispatch 2026-03-09T20:32:00.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:00 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:32:00.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:00 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1239324894' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm05-94573-104"}]': finished 2026-03-09T20:32:00.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:00 vm05 ceph-mon[61345]: osdmap e551: 8 total, 8 up, 8 in 2026-03-09T20:32:00.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:00 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1239324894' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm05-94573-104"}]: dispatch 2026-03-09T20:32:00.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:00 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:32:00.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:00 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1239324894' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm05-94573-104"}]': finished 2026-03-09T20:32:00.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:00 vm05 ceph-mon[51870]: osdmap e551: 8 total, 8 up, 8 in 2026-03-09T20:32:00.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:00 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1239324894' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm05-94573-104"}]: dispatch 2026-03-09T20:32:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:01 vm09 ceph-mon[54524]: pgmap v832: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T20:32:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:01 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1239324894' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm05-94573-104"}]': finished 2026-03-09T20:32:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:01 vm09 ceph-mon[54524]: osdmap e552: 8 total, 8 up, 8 in 2026-03-09T20:32:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:01 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:32:01.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:01 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:32:01.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:01 vm05 ceph-mon[61345]: pgmap v832: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T20:32:01.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:01 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1239324894' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm05-94573-104"}]': finished 2026-03-09T20:32:01.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:01 vm05 ceph-mon[61345]: osdmap e552: 8 total, 8 up, 8 in 2026-03-09T20:32:01.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:01 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:32:01.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:01 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:32:01.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:01 vm05 ceph-mon[51870]: pgmap v832: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T20:32:01.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:01 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1239324894' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm05-94573-104"}]': finished 2026-03-09T20:32:01.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:01 vm05 ceph-mon[51870]: osdmap e552: 8 total, 8 up, 8 in 2026-03-09T20:32:01.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:01 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:32:01.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:01 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:32:02.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:02 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm05-94573-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:32:02.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:02 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm05-94573-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:32:02.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:02 vm09 ceph-mon[54524]: osdmap e553: 8 total, 8 up, 8 in 2026-03-09T20:32:02.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:02 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-94573-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:32:02.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:02 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm05-94573-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:32:02.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:02 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm05-94573-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:32:02.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:02 vm05 ceph-mon[61345]: osdmap e553: 8 total, 8 up, 8 in 2026-03-09T20:32:02.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:02 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-94573-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:32:02.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:02 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm05-94573-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T20:32:02.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:02 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm05-94573-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T20:32:02.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:02 vm05 ceph-mon[51870]: osdmap e553: 8 total, 8 up, 8 in 2026-03-09T20:32:02.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:02 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-94573-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:32:03.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:03 vm05 ceph-mon[61345]: pgmap v835: 228 pgs: 228 active+clean; 455 KiB data, 1016 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T20:32:03.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:03 vm05 ceph-mon[51870]: pgmap v835: 228 pgs: 228 active+clean; 455 KiB data, 1016 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T20:32:04.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:03 vm09 ceph-mon[54524]: pgmap v835: 228 pgs: 228 active+clean; 455 KiB data, 1016 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T20:32:04.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:04 vm05 ceph-mon[61345]: osdmap e554: 8 total, 8 up, 8 in 2026-03-09T20:32:04.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:04 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-94573-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:32:04.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:04 vm05 ceph-mon[61345]: osdmap e555: 8 total, 8 up, 8 in 2026-03-09T20:32:04.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:04 vm05 ceph-mon[51870]: osdmap e554: 8 total, 8 up, 8 in 2026-03-09T20:32:04.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:04 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-94573-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:32:04.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:04 vm05 ceph-mon[51870]: osdmap e555: 8 total, 8 up, 8 in 2026-03-09T20:32:05.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:04 vm09 ceph-mon[54524]: osdmap e554: 8 total, 8 up, 8 in 2026-03-09T20:32:05.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:04 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-94573-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:32:05.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:04 vm09 ceph-mon[54524]: osdmap e555: 8 total, 8 up, 8 in 2026-03-09T20:32:05.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:05 vm05 ceph-mon[61345]: pgmap v838: 228 pgs: 228 active+clean; 455 KiB data, 1016 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:32:05.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:05 vm05 ceph-mon[51870]: pgmap v838: 228 pgs: 228 active+clean; 455 KiB data, 1016 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:32:06.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:05 vm09 ceph-mon[54524]: pgmap v838: 228 pgs: 228 active+clean; 455 KiB data, 1016 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:32:06.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:32:05 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:32:06.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:06 vm05 ceph-mon[61345]: osdmap e556: 8 total, 8 up, 8 in 2026-03-09T20:32:06.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:06 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-112","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:32:06.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:06 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:32:06.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:06 vm05 ceph-mon[61345]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:32:06.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:06 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-112","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:32:06.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:06 vm05 ceph-mon[61345]: osdmap e557: 8 total, 8 up, 8 in 2026-03-09T20:32:06.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:06 vm05 ceph-mon[51870]: osdmap e556: 8 total, 8 up, 8 in 2026-03-09T20:32:06.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:06 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-112","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:32:06.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:06 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:32:06.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:06 vm05 ceph-mon[51870]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:32:06.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:06 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-112","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:32:06.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:06 vm05 ceph-mon[51870]: osdmap e557: 8 total, 8 up, 8 in 2026-03-09T20:32:07.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:06 vm09 ceph-mon[54524]: osdmap e556: 8 total, 8 up, 8 in 2026-03-09T20:32:07.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:06 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-112","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:32:07.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:06 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:32:07.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:06 vm09 ceph-mon[54524]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:32:07.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:06 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-112","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:32:07.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:06 vm09 ceph-mon[54524]: osdmap e557: 8 total, 8 up, 8 in 2026-03-09T20:32:07.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:07 vm05 ceph-mon[61345]: pgmap v841: 268 pgs: 28 unknown, 12 creating+peering, 228 active+clean; 455 KiB data, 1017 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:32:07.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:07 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-112", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:32:07.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:07 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-112", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:32:07.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:07 vm05 ceph-mon[61345]: osdmap e558: 8 total, 8 up, 8 in 2026-03-09T20:32:07.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:07 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-112"}]: dispatch 2026-03-09T20:32:07.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:07 vm05 ceph-mon[51870]: pgmap v841: 268 pgs: 28 unknown, 12 creating+peering, 228 active+clean; 455 KiB data, 1017 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:32:07.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:07 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-112", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:32:07.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:07 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-112", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:32:07.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:07 vm05 ceph-mon[51870]: osdmap e558: 8 total, 8 up, 8 in 2026-03-09T20:32:07.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:07 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-112"}]: dispatch 2026-03-09T20:32:08.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:07 vm09 ceph-mon[54524]: pgmap v841: 268 pgs: 28 unknown, 12 creating+peering, 228 active+clean; 455 KiB data, 1017 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:32:08.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:07 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-112", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:32:08.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:07 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-112", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:32:08.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:07 vm09 ceph-mon[54524]: osdmap e558: 8 total, 8 up, 8 in 2026-03-09T20:32:08.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:07 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-112"}]: dispatch 2026-03-09T20:32:08.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:32:08 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:32:08] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:32:09.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:09 vm05 ceph-mon[61345]: pgmap v844: 268 pgs: 28 unknown, 12 creating+peering, 228 active+clean; 455 KiB data, 1017 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:32:09.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:09 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-112"}]': finished 2026-03-09T20:32:09.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:09 vm05 ceph-mon[61345]: osdmap e559: 8 total, 8 up, 8 in 2026-03-09T20:32:09.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:09 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:32:09.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:09 vm05 ceph-mon[51870]: pgmap v844: 268 pgs: 28 unknown, 12 creating+peering, 228 active+clean; 455 KiB data, 1017 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:32:09.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:09 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-112"}]': finished 2026-03-09T20:32:09.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:09 vm05 ceph-mon[51870]: osdmap e559: 8 total, 8 up, 8 in 2026-03-09T20:32:09.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:09 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:32:10.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:09 vm09 ceph-mon[54524]: pgmap v844: 268 pgs: 28 unknown, 12 creating+peering, 228 active+clean; 455 KiB data, 1017 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:32:10.024 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:09 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-112"}]': finished 2026-03-09T20:32:10.024 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:09 vm09 ceph-mon[54524]: osdmap e559: 8 total, 8 up, 8 in 2026-03-09T20:32:10.024 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:09 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:32:11.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:10 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:32:11.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:10 vm09 ceph-mon[54524]: osdmap e560: 8 total, 8 up, 8 in 2026-03-09T20:32:11.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:10 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-112"}]: dispatch 2026-03-09T20:32:11.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:10 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:32:11.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:10 vm05 ceph-mon[61345]: osdmap e560: 8 total, 8 up, 8 in 2026-03-09T20:32:11.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:10 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-112"}]: dispatch 2026-03-09T20:32:11.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:10 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:32:11.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:10 vm05 ceph-mon[51870]: osdmap e560: 8 total, 8 up, 8 in 2026-03-09T20:32:11.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:10 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-112"}]: dispatch 2026-03-09T20:32:12.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:11 vm09 ceph-mon[54524]: pgmap v847: 268 pgs: 16 unknown, 9 creating+peering, 243 active+clean; 455 KiB data, 1017 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:32:12.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:11 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-112"}]': finished 2026-03-09T20:32:12.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:11 vm09 ceph-mon[54524]: osdmap e561: 8 total, 8 up, 8 in 2026-03-09T20:32:12.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:11 vm09 ceph-mon[54524]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:32:12.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:11 vm05 ceph-mon[61345]: pgmap v847: 268 pgs: 16 unknown, 9 creating+peering, 243 active+clean; 455 KiB data, 1017 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:32:12.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:11 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-112"}]': finished 2026-03-09T20:32:12.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:11 vm05 ceph-mon[61345]: osdmap e561: 8 total, 8 up, 8 in 2026-03-09T20:32:12.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:11 vm05 ceph-mon[61345]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:32:12.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:11 vm05 ceph-mon[51870]: pgmap v847: 268 pgs: 16 unknown, 9 creating+peering, 243 active+clean; 455 KiB data, 1017 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:32:12.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:11 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-112"}]': finished 2026-03-09T20:32:12.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:11 vm05 ceph-mon[51870]: osdmap e561: 8 total, 8 up, 8 in 2026-03-09T20:32:12.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:11 vm05 ceph-mon[51870]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:32:13.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:12 vm09 ceph-mon[54524]: osdmap e562: 8 total, 8 up, 8 in 2026-03-09T20:32:13.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:12 vm09 ceph-mon[54524]: osdmap e563: 8 total, 8 up, 8 in 2026-03-09T20:32:13.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:12 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-114","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:32:13.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:12 vm05 ceph-mon[61345]: osdmap e562: 8 total, 8 up, 8 in 2026-03-09T20:32:13.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:12 vm05 ceph-mon[61345]: osdmap e563: 8 total, 8 up, 8 in 2026-03-09T20:32:13.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:12 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-114","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:32:13.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:12 vm05 ceph-mon[51870]: osdmap e562: 8 total, 8 up, 8 in 2026-03-09T20:32:13.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:12 vm05 ceph-mon[51870]: osdmap e563: 8 total, 8 up, 8 in 2026-03-09T20:32:13.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:12 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-114","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:32:14.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:13 vm09 ceph-mon[54524]: pgmap v850: 236 pgs: 236 active+clean; 455 KiB data, 1017 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:32:14.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:13 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-114","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:32:14.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:13 vm09 ceph-mon[54524]: osdmap e564: 8 total, 8 up, 8 in 2026-03-09T20:32:14.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:13 vm05 ceph-mon[61345]: pgmap v850: 236 pgs: 236 active+clean; 455 KiB data, 1017 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:32:14.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:13 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-114","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:32:14.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:13 vm05 ceph-mon[61345]: osdmap e564: 8 total, 8 up, 8 in 2026-03-09T20:32:14.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:13 vm05 ceph-mon[51870]: pgmap v850: 236 pgs: 236 active+clean; 455 KiB data, 1017 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:32:14.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:13 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-114","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:32:14.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:13 vm05 ceph-mon[51870]: osdmap e564: 8 total, 8 up, 8 in 2026-03-09T20:32:15.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:14 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-114", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:32:15.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:14 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-114", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:32:15.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:14 vm09 ceph-mon[54524]: osdmap e565: 8 total, 8 up, 8 in 2026-03-09T20:32:15.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:14 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-114"}]: dispatch 2026-03-09T20:32:15.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:14 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-114", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:32:15.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:14 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-114", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:32:15.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:14 vm05 ceph-mon[61345]: osdmap e565: 8 total, 8 up, 8 in 2026-03-09T20:32:15.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:14 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-114"}]: dispatch 2026-03-09T20:32:15.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:14 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-114", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:32:15.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:14 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-114", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:32:15.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:14 vm05 ceph-mon[51870]: osdmap e565: 8 total, 8 up, 8 in 2026-03-09T20:32:15.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:14 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-114"}]: dispatch 2026-03-09T20:32:16.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:15 vm09 ceph-mon[54524]: pgmap v853: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1017 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s 2026-03-09T20:32:16.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:15 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:32:16.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:32:15 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:32:16.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:15 vm05 ceph-mon[61345]: pgmap v853: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1017 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s 2026-03-09T20:32:16.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:15 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:32:16.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:15 vm05 ceph-mon[51870]: pgmap v853: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1017 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s 2026-03-09T20:32:16.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:15 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:32:17.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:16 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-114"}]': finished 2026-03-09T20:32:17.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:16 vm09 ceph-mon[54524]: osdmap e566: 8 total, 8 up, 8 in 2026-03-09T20:32:17.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:16 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-114", "mode": "writeback"}]: dispatch 2026-03-09T20:32:17.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:16 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:32:17.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:16 vm09 ceph-mon[54524]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:32:17.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:16 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-114", "mode": "writeback"}]': finished 2026-03-09T20:32:17.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:16 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-114"}]': finished 2026-03-09T20:32:17.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:16 vm05 ceph-mon[61345]: osdmap e566: 8 total, 8 up, 8 in 2026-03-09T20:32:17.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:16 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-114", "mode": "writeback"}]: dispatch 2026-03-09T20:32:17.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:16 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:32:17.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:16 vm05 ceph-mon[61345]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:32:17.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:16 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-114", "mode": "writeback"}]': finished 2026-03-09T20:32:17.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:16 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-114"}]': finished 2026-03-09T20:32:17.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:16 vm05 ceph-mon[51870]: osdmap e566: 8 total, 8 up, 8 in 2026-03-09T20:32:17.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:16 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-114", "mode": "writeback"}]: dispatch 2026-03-09T20:32:17.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:16 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:32:17.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:16 vm05 ceph-mon[51870]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:32:17.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:16 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-114", "mode": "writeback"}]': finished 2026-03-09T20:32:18.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:17 vm09 ceph-mon[54524]: pgmap v856: 268 pgs: 268 active+clean; 455 KiB data, 1018 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:32:18.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:17 vm09 ceph-mon[54524]: osdmap e567: 8 total, 8 up, 8 in 2026-03-09T20:32:18.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:17 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:32:18.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:17 vm05 ceph-mon[61345]: pgmap v856: 268 pgs: 268 active+clean; 455 KiB data, 1018 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:32:18.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:17 vm05 ceph-mon[61345]: osdmap e567: 8 total, 8 up, 8 in 2026-03-09T20:32:18.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:17 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:32:18.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:17 vm05 ceph-mon[51870]: pgmap v856: 268 pgs: 268 active+clean; 455 KiB data, 1018 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:32:18.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:17 vm05 ceph-mon[51870]: osdmap e567: 8 total, 8 up, 8 in 2026-03-09T20:32:18.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:17 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:32:18.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:18 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:32:18.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:18 vm05 ceph-mon[61345]: osdmap e568: 8 total, 8 up, 8 in 2026-03-09T20:32:18.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:18 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-114"}]: dispatch 2026-03-09T20:32:18.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:32:18 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:32:18] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:32:18.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:18 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:32:18.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:18 vm05 ceph-mon[51870]: osdmap e568: 8 total, 8 up, 8 in 2026-03-09T20:32:18.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:18 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-114"}]: dispatch 2026-03-09T20:32:19.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:18 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:32:19.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:18 vm09 ceph-mon[54524]: osdmap e568: 8 total, 8 up, 8 in 2026-03-09T20:32:19.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:18 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-114"}]: dispatch 2026-03-09T20:32:20.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:19 vm05 ceph-mon[51870]: pgmap v859: 268 pgs: 268 active+clean; 455 KiB data, 1018 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:32:20.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:19 vm05 ceph-mon[51870]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:32:20.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:19 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-114"}]': finished 2026-03-09T20:32:20.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:19 vm05 ceph-mon[51870]: osdmap e569: 8 total, 8 up, 8 in 2026-03-09T20:32:20.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:19 vm05 ceph-mon[61345]: pgmap v859: 268 pgs: 268 active+clean; 455 KiB data, 1018 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:32:20.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:19 vm05 ceph-mon[61345]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:32:20.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:19 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-114"}]': finished 2026-03-09T20:32:20.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:19 vm05 ceph-mon[61345]: osdmap e569: 8 total, 8 up, 8 in 2026-03-09T20:32:20.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:19 vm09 ceph-mon[54524]: pgmap v859: 268 pgs: 268 active+clean; 455 KiB data, 1018 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:32:20.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:19 vm09 ceph-mon[54524]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:32:20.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:19 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-114"}]': finished 2026-03-09T20:32:20.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:19 vm09 ceph-mon[54524]: osdmap e569: 8 total, 8 up, 8 in 2026-03-09T20:32:21.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:20 vm05 ceph-mon[61345]: osdmap e570: 8 total, 8 up, 8 in 2026-03-09T20:32:21.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:20 vm05 ceph-mon[61345]: osdmap e571: 8 total, 8 up, 8 in 2026-03-09T20:32:21.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:20 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-116","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:32:21.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:20 vm05 ceph-mon[51870]: osdmap e570: 8 total, 8 up, 8 in 2026-03-09T20:32:21.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:20 vm05 ceph-mon[51870]: osdmap e571: 8 total, 8 up, 8 in 2026-03-09T20:32:21.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:20 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-116","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:32:21.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:20 vm09 ceph-mon[54524]: osdmap e570: 8 total, 8 up, 8 in 2026-03-09T20:32:21.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:20 vm09 ceph-mon[54524]: osdmap e571: 8 total, 8 up, 8 in 2026-03-09T20:32:21.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:20 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-116","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:32:22.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:21 vm05 ceph-mon[61345]: pgmap v862: 236 pgs: 236 active+clean; 455 KiB data, 1018 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:32:22.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:21 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-116","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:32:22.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:21 vm05 ceph-mon[61345]: osdmap e572: 8 total, 8 up, 8 in 2026-03-09T20:32:22.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:21 vm05 ceph-mon[51870]: pgmap v862: 236 pgs: 236 active+clean; 455 KiB data, 1018 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:32:22.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:21 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-116","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:32:22.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:21 vm05 ceph-mon[51870]: osdmap e572: 8 total, 8 up, 8 in 2026-03-09T20:32:22.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:21 vm09 ceph-mon[54524]: pgmap v862: 236 pgs: 236 active+clean; 455 KiB data, 1018 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:32:22.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:21 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-116","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:32:22.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:21 vm09 ceph-mon[54524]: osdmap e572: 8 total, 8 up, 8 in 2026-03-09T20:32:24.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:23 vm05 ceph-mon[61345]: pgmap v865: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1019 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:32:24.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:23 vm05 ceph-mon[61345]: osdmap e573: 8 total, 8 up, 8 in 2026-03-09T20:32:24.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:23 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-116", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:32:24.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:23 vm05 ceph-mon[51870]: pgmap v865: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1019 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:32:24.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:23 vm05 ceph-mon[51870]: osdmap e573: 8 total, 8 up, 8 in 2026-03-09T20:32:24.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:23 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-116", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:32:24.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:23 vm09 ceph-mon[54524]: pgmap v865: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1019 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:32:24.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:23 vm09 ceph-mon[54524]: osdmap e573: 8 total, 8 up, 8 in 2026-03-09T20:32:24.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:23 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-116", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:32:25.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:24 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-116", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:32:25.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:24 vm05 ceph-mon[61345]: osdmap e574: 8 total, 8 up, 8 in 2026-03-09T20:32:25.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:24 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-116"}]: dispatch 2026-03-09T20:32:25.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:24 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-116", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:32:25.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:24 vm05 ceph-mon[51870]: osdmap e574: 8 total, 8 up, 8 in 2026-03-09T20:32:25.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:24 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-116"}]: dispatch 2026-03-09T20:32:25.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:24 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-116", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:32:25.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:24 vm09 ceph-mon[54524]: osdmap e574: 8 total, 8 up, 8 in 2026-03-09T20:32:25.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:24 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-116"}]: dispatch 2026-03-09T20:32:26.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:25 vm05 ceph-mon[61345]: pgmap v868: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1019 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 0 op/s 2026-03-09T20:32:26.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:25 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-116"}]': finished 2026-03-09T20:32:26.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:25 vm05 ceph-mon[61345]: osdmap e575: 8 total, 8 up, 8 in 2026-03-09T20:32:26.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:25 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-116", "mode": "writeback"}]: dispatch 2026-03-09T20:32:26.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:25 vm05 ceph-mon[61345]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:32:26.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:25 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-116", "mode": "writeback"}]': finished 2026-03-09T20:32:26.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:25 vm05 ceph-mon[61345]: osdmap e576: 8 total, 8 up, 8 in 2026-03-09T20:32:26.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:25 vm05 ceph-mon[51870]: pgmap v868: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1019 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 0 op/s 2026-03-09T20:32:26.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:25 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-116"}]': finished 2026-03-09T20:32:26.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:25 vm05 ceph-mon[51870]: osdmap e575: 8 total, 8 up, 8 in 2026-03-09T20:32:26.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:25 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-116", "mode": "writeback"}]: dispatch 2026-03-09T20:32:26.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:25 vm05 ceph-mon[51870]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:32:26.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:25 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-116", "mode": "writeback"}]': finished 2026-03-09T20:32:26.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:25 vm05 ceph-mon[51870]: osdmap e576: 8 total, 8 up, 8 in 2026-03-09T20:32:26.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:25 vm09 ceph-mon[54524]: pgmap v868: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1019 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 0 op/s 2026-03-09T20:32:26.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:25 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-116"}]': finished 2026-03-09T20:32:26.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:25 vm09 ceph-mon[54524]: osdmap e575: 8 total, 8 up, 8 in 2026-03-09T20:32:26.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:25 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-116", "mode": "writeback"}]: dispatch 2026-03-09T20:32:26.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:25 vm09 ceph-mon[54524]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:32:26.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:25 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-116", "mode": "writeback"}]': finished 2026-03-09T20:32:26.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:25 vm09 ceph-mon[54524]: osdmap e576: 8 total, 8 up, 8 in 2026-03-09T20:32:26.273 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:32:25 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:32:27.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:26 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:32:27.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:26 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "318.5"}]: dispatch 2026-03-09T20:32:27.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:26 vm05 ceph-mon[61345]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "318.5"}]: dispatch 2026-03-09T20:32:27.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:26 vm05 ceph-mon[61345]: 318.5 scrub starts 2026-03-09T20:32:27.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:26 vm05 ceph-mon[61345]: 318.5 scrub ok 2026-03-09T20:32:27.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:26 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:32:27.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:26 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "318.5"}]: dispatch 2026-03-09T20:32:27.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:26 vm05 ceph-mon[51870]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "318.5"}]: dispatch 2026-03-09T20:32:27.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:26 vm05 ceph-mon[51870]: 318.5 scrub starts 2026-03-09T20:32:27.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:26 vm05 ceph-mon[51870]: 318.5 scrub ok 2026-03-09T20:32:27.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:26 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:32:27.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:26 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "318.5"}]: dispatch 2026-03-09T20:32:27.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:26 vm09 ceph-mon[54524]: from='mon.0 v1:192.168.123.105:0/4038530932' entity='mon.' 
cmd=[{"prefix": "pg scrub", "pgid": "318.5"}]: dispatch 2026-03-09T20:32:27.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:26 vm09 ceph-mon[54524]: 318.5 scrub starts 2026-03-09T20:32:27.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:26 vm09 ceph-mon[54524]: 318.5 scrub ok 2026-03-09T20:32:28.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:27 vm05 ceph-mon[61345]: pgmap v871: 268 pgs: 268 active+clean; 455 KiB data, 1003 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T20:32:28.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:27 vm05 ceph-mon[51870]: pgmap v871: 268 pgs: 268 active+clean; 455 KiB data, 1003 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T20:32:28.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:27 vm09 ceph-mon[54524]: pgmap v871: 268 pgs: 268 active+clean; 455 KiB data, 1003 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T20:32:28.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:32:28 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:32:28] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:32:30.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:29 vm05 ceph-mon[61345]: pgmap v872: 268 pgs: 268 active+clean; 455 KiB data, 1003 MiB used, 159 GiB / 160 GiB avail; 919 B/s rd, 1.3 KiB/s wr, 2 op/s 2026-03-09T20:32:30.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:29 vm05 ceph-mon[51870]: pgmap v872: 268 pgs: 268 active+clean; 455 KiB data, 1003 MiB used, 159 GiB / 160 GiB avail; 919 B/s rd, 1.3 KiB/s wr, 2 op/s 2026-03-09T20:32:30.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:29 vm09 ceph-mon[54524]: pgmap v872: 268 pgs: 268 active+clean; 455 KiB data, 1003 MiB used, 159 GiB / 160 GiB avail; 919 B/s rd, 1.3 KiB/s wr, 2 op/s 2026-03-09T20:32:31.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:30 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:32:31.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:30 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:32:31.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:30 vm09 ceph-mon[54524]: pgmap v873: 268 pgs: 268 active+clean; 455 KiB data, 1003 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.1 KiB/s wr, 2 op/s 2026-03-09T20:32:31.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:30 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:32:31.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:30 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:32:31.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:30 vm05 ceph-mon[61345]: pgmap v873: 268 pgs: 268 active+clean; 455 KiB data, 1003 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.1 KiB/s wr, 2 op/s 2026-03-09T20:32:31.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:30 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:32:31.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:30 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:32:31.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:30 vm05 
ceph-mon[51870]: pgmap v873: 268 pgs: 268 active+clean; 455 KiB data, 1003 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.1 KiB/s wr, 2 op/s 2026-03-09T20:32:33.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:33 vm09 ceph-mon[54524]: pgmap v874: 268 pgs: 268 active+clean; 455 KiB data, 1003 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 895 B/s wr, 2 op/s 2026-03-09T20:32:33.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:33 vm05 ceph-mon[61345]: pgmap v874: 268 pgs: 268 active+clean; 455 KiB data, 1003 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 895 B/s wr, 2 op/s 2026-03-09T20:32:33.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:33 vm05 ceph-mon[51870]: pgmap v874: 268 pgs: 268 active+clean; 455 KiB data, 1003 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 895 B/s wr, 2 op/s 2026-03-09T20:32:35.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:35 vm09 ceph-mon[54524]: pgmap v875: 268 pgs: 268 active+clean; 455 KiB data, 1003 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 751 B/s wr, 2 op/s 2026-03-09T20:32:35.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:35 vm05 ceph-mon[61345]: pgmap v875: 268 pgs: 268 active+clean; 455 KiB data, 1003 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 751 B/s wr, 2 op/s 2026-03-09T20:32:35.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:35 vm05 ceph-mon[51870]: pgmap v875: 268 pgs: 268 active+clean; 455 KiB data, 1003 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 751 B/s wr, 2 op/s 2026-03-09T20:32:36.273 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:32:35 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:32:36.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:36 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:32:36.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:36 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:32:36.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:36 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:32:37.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:37 vm09 ceph-mon[54524]: pgmap v876: 268 pgs: 268 active+clean; 455 KiB data, 1003 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T20:32:37.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:37 vm05 ceph-mon[61345]: pgmap v876: 268 pgs: 268 active+clean; 455 KiB data, 1003 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T20:32:37.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:37 vm05 ceph-mon[51870]: pgmap v876: 268 pgs: 268 active+clean; 455 KiB data, 1003 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T20:32:38.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:32:38 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:32:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:32:39.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:39 vm09 ceph-mon[54524]: pgmap v877: 268 pgs: 268 active+clean; 455 KiB data, 1003 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 1 op/s 
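(Annotation, not part of the captured journalctl output: the mon audit entries above trace the cache-tier lifecycle that the rados_api_tests workunit drives against the base pool test-rados-api-vm05-94573-111. The test issues these as JSON mon commands over librados; a rough ceph CLI equivalent of one cycle, following the order and pool names seen in the log, would be the sketch below. The CLI form is an illustration only, not how the test actually invokes the monitor.)

  # one tier cycle as logged: enable app on the cache pool, attach it as a tier,
  # make it the overlay, switch to writeback, then tear the tier back down
  ceph osd pool application enable test-rados-api-vm05-94573-114 rados --yes-i-really-mean-it
  ceph osd tier add test-rados-api-vm05-94573-111 test-rados-api-vm05-94573-114 --force-nonempty
  ceph osd tier set-overlay test-rados-api-vm05-94573-111 test-rados-api-vm05-94573-114
  ceph osd tier cache-mode test-rados-api-vm05-94573-114 writeback   # mons raise CACHE_POOL_NO_HIT_SET while no hit_set is configured
  ceph osd tier remove-overlay test-rados-api-vm05-94573-111
  ceph osd tier remove test-rados-api-vm05-94573-111 test-rados-api-vm05-94573-114   # clears CACHE_POOL_NO_HIT_SET
  ceph pg scrub 318.5                                                # the scrub request also visible in the audit log

The same cycle repeats with cache pools test-rados-api-vm05-94573-116 and -118 later in this section, each time bumping the osdmap epoch and briefly toggling the CACHE_POOL_NO_HIT_SET health check.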
2026-03-09T20:32:39.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:39 vm05 ceph-mon[61345]: pgmap v877: 268 pgs: 268 active+clean; 455 KiB data, 1003 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 1 op/s 2026-03-09T20:32:39.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:39 vm05 ceph-mon[51870]: pgmap v877: 268 pgs: 268 active+clean; 455 KiB data, 1003 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 1 op/s 2026-03-09T20:32:41.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:41 vm09 ceph-mon[54524]: pgmap v878: 268 pgs: 268 active+clean; 455 KiB data, 1003 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-09T20:32:41.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:41 vm05 ceph-mon[61345]: pgmap v878: 268 pgs: 268 active+clean; 455 KiB data, 1003 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-09T20:32:41.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:41 vm05 ceph-mon[51870]: pgmap v878: 268 pgs: 268 active+clean; 455 KiB data, 1003 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-09T20:32:42.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:42 vm05 ceph-mon[61345]: osdmap e577: 8 total, 8 up, 8 in 2026-03-09T20:32:42.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:42 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:32:42.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:42 vm05 ceph-mon[51870]: osdmap e577: 8 total, 8 up, 8 in 2026-03-09T20:32:42.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:42 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:32:43.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:42 vm09 ceph-mon[54524]: osdmap e577: 8 total, 8 up, 8 in 2026-03-09T20:32:43.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:42 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:32:43.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:43 vm05 ceph-mon[61345]: pgmap v880: 268 pgs: 268 active+clean; 455 KiB data, 1003 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:32:43.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:43 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:32:43.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:43 vm05 ceph-mon[61345]: osdmap e578: 8 total, 8 up, 8 in 2026-03-09T20:32:43.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:43 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-116"}]: dispatch 2026-03-09T20:32:43.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:43 vm05 ceph-mon[51870]: pgmap v880: 268 pgs: 268 active+clean; 455 KiB data, 1003 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:32:43.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:43 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:32:43.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:43 vm05 ceph-mon[51870]: osdmap e578: 8 total, 8 up, 8 in 2026-03-09T20:32:43.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:43 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-116"}]: dispatch 2026-03-09T20:32:44.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:43 vm09 ceph-mon[54524]: pgmap v880: 268 pgs: 268 active+clean; 455 KiB data, 1003 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:32:44.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:43 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:32:44.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:43 vm09 ceph-mon[54524]: osdmap e578: 8 total, 8 up, 8 in 2026-03-09T20:32:44.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:43 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-116"}]: dispatch 2026-03-09T20:32:44.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:44 vm05 ceph-mon[61345]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:32:44.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:44 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-116"}]': finished 2026-03-09T20:32:44.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:44 vm05 ceph-mon[61345]: osdmap e579: 8 total, 8 up, 8 in 2026-03-09T20:32:44.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:44 vm05 ceph-mon[51870]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:32:44.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:44 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-116"}]': finished 2026-03-09T20:32:44.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:44 vm05 ceph-mon[51870]: osdmap e579: 8 total, 8 up, 8 in 2026-03-09T20:32:45.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:44 vm09 ceph-mon[54524]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:32:45.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:44 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-116"}]': finished 2026-03-09T20:32:45.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:44 vm09 ceph-mon[54524]: osdmap e579: 8 total, 8 up, 8 in 2026-03-09T20:32:45.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:45 vm05 ceph-mon[61345]: pgmap v883: 268 pgs: 268 active+clean; 455 KiB data, 1003 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:32:45.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:45 vm05 ceph-mon[61345]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:32:45.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:45 vm05 ceph-mon[61345]: osdmap e580: 8 total, 8 up, 8 in 2026-03-09T20:32:45.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:45 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:32:45.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:45 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:32:45.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:45 vm05 ceph-mon[51870]: pgmap v883: 268 pgs: 268 active+clean; 455 KiB data, 1003 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:32:45.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:45 vm05 ceph-mon[51870]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:32:45.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:45 vm05 ceph-mon[51870]: osdmap e580: 8 total, 8 up, 8 in 2026-03-09T20:32:45.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:45 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:32:45.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:45 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:32:46.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:45 vm09 ceph-mon[54524]: pgmap v883: 268 pgs: 268 active+clean; 455 KiB data, 1003 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:32:46.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:45 vm09 ceph-mon[54524]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:32:46.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:45 vm09 ceph-mon[54524]: osdmap e580: 8 total, 8 up, 8 in 2026-03-09T20:32:46.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:45 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:32:46.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:45 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:32:46.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:32:45 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:32:46.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:46 vm05 ceph-mon[61345]: osdmap e581: 8 total, 8 up, 8 in 2026-03-09T20:32:46.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:46 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-118","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:32:46.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:46 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:32:46.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:46 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-118","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:32:46.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:46 vm05 ceph-mon[51870]: osdmap e581: 8 total, 8 up, 8 in 2026-03-09T20:32:46.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:46 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-118","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:32:46.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:46 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:32:46.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:46 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-118","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:32:47.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:46 vm09 ceph-mon[54524]: osdmap e581: 8 total, 8 up, 8 in 2026-03-09T20:32:47.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:46 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-118","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:32:47.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:46 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:32:47.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:46 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-118","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:32:47.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:47 vm05 ceph-mon[61345]: pgmap v886: 268 pgs: 10 creating+peering, 22 unknown, 2 active+clean+snaptrim, 234 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 1 op/s 2026-03-09T20:32:47.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:47 vm05 ceph-mon[61345]: osdmap e582: 8 total, 8 up, 8 in 2026-03-09T20:32:47.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:47 vm05 ceph-mon[61345]: osdmap e583: 8 total, 8 up, 8 in 2026-03-09T20:32:47.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:47 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-118", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:32:47.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:47 vm05 ceph-mon[51870]: pgmap v886: 268 pgs: 10 creating+peering, 22 unknown, 2 active+clean+snaptrim, 234 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 1 op/s 2026-03-09T20:32:47.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:47 vm05 ceph-mon[51870]: osdmap e582: 8 total, 8 up, 8 in 2026-03-09T20:32:47.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:47 vm05 ceph-mon[51870]: osdmap e583: 8 total, 8 up, 8 in 2026-03-09T20:32:47.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:47 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-118", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:32:48.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:47 vm09 ceph-mon[54524]: pgmap v886: 268 pgs: 10 creating+peering, 22 unknown, 2 active+clean+snaptrim, 234 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 1 op/s 2026-03-09T20:32:48.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:47 vm09 ceph-mon[54524]: osdmap e582: 8 total, 8 up, 8 in 2026-03-09T20:32:48.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:47 vm09 ceph-mon[54524]: osdmap e583: 8 total, 8 up, 8 in 2026-03-09T20:32:48.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:47 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-118", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:32:48.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:32:48 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:32:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:32:49.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:49 vm05 ceph-mon[61345]: pgmap v889: 268 pgs: 10 creating+peering, 22 unknown, 2 active+clean+snaptrim, 234 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 1 op/s 2026-03-09T20:32:49.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:49 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-118", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:32:49.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:49 vm05 ceph-mon[61345]: osdmap e584: 8 total, 8 up, 8 in 2026-03-09T20:32:49.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:49 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-118"}]: dispatch 2026-03-09T20:32:49.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:49 vm05 ceph-mon[51870]: pgmap v889: 268 pgs: 10 creating+peering, 22 unknown, 2 active+clean+snaptrim, 234 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 1 op/s 2026-03-09T20:32:49.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:49 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-118", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:32:49.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:49 vm05 ceph-mon[51870]: osdmap e584: 8 total, 8 up, 8 in 2026-03-09T20:32:49.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:49 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-118"}]: dispatch 2026-03-09T20:32:50.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:49 vm09 ceph-mon[54524]: pgmap v889: 268 pgs: 10 creating+peering, 22 unknown, 2 active+clean+snaptrim, 234 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 1 op/s 2026-03-09T20:32:50.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:49 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-118", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:32:50.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:49 vm09 ceph-mon[54524]: osdmap e584: 8 total, 8 up, 8 in 2026-03-09T20:32:50.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:49 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-118"}]: dispatch 2026-03-09T20:32:50.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:50 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-118"}]': finished 2026-03-09T20:32:50.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:50 vm05 ceph-mon[61345]: osdmap e585: 8 total, 8 up, 8 in 2026-03-09T20:32:50.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:50 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-118", "mode": "writeback"}]: dispatch 2026-03-09T20:32:50.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:50 vm05 ceph-mon[61345]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:32:50.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:50 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-118", "mode": "writeback"}]': finished 2026-03-09T20:32:50.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:50 vm05 ceph-mon[61345]: osdmap e586: 8 total, 8 up, 8 in 2026-03-09T20:32:50.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:50 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-118"}]': finished 2026-03-09T20:32:50.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:50 vm05 ceph-mon[51870]: osdmap e585: 8 total, 8 up, 8 in 2026-03-09T20:32:50.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:50 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-118", "mode": "writeback"}]: dispatch 2026-03-09T20:32:50.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:50 vm05 ceph-mon[51870]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:32:50.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:50 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-118", "mode": "writeback"}]': finished 2026-03-09T20:32:50.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:50 vm05 ceph-mon[51870]: osdmap e586: 8 total, 8 up, 8 in 2026-03-09T20:32:51.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:50 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-118"}]': finished 2026-03-09T20:32:51.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:50 vm09 ceph-mon[54524]: osdmap e585: 8 total, 8 up, 8 in 2026-03-09T20:32:51.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:50 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-118", "mode": "writeback"}]: dispatch 2026-03-09T20:32:51.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:50 vm09 ceph-mon[54524]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:32:51.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:50 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-118", "mode": "writeback"}]': finished 2026-03-09T20:32:51.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:50 vm09 ceph-mon[54524]: osdmap e586: 8 total, 8 up, 8 in 2026-03-09T20:32:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:51 vm05 ceph-mon[61345]: pgmap v892: 268 pgs: 10 creating+peering, 8 unknown, 2 active+clean+snaptrim, 248 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:32:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:51 vm05 ceph-mon[61345]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:32:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:51 vm05 ceph-mon[61345]: osdmap e587: 8 total, 8 up, 8 in 2026-03-09T20:32:51.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:51 vm05 ceph-mon[51870]: pgmap v892: 268 pgs: 10 creating+peering, 8 unknown, 2 active+clean+snaptrim, 248 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:32:51.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:51 vm05 ceph-mon[51870]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:32:51.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:51 vm05 ceph-mon[51870]: osdmap e587: 8 total, 8 up, 8 in 2026-03-09T20:32:52.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:51 vm09 ceph-mon[54524]: pgmap v892: 268 pgs: 10 creating+peering, 8 unknown, 2 active+clean+snaptrim, 248 active+clean; 455 KiB data, 1004 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:32:52.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:51 vm09 ceph-mon[54524]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:32:52.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:51 vm09 ceph-mon[54524]: osdmap e587: 8 total, 8 up, 8 in 2026-03-09T20:32:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:52 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:32:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:52 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:32:53.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:52 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:32:53.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:53 vm05 ceph-mon[51870]: pgmap v895: 268 pgs: 268 active+clean; 455 KiB data, 1005 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T20:32:53.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:53 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:32:53.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:53 vm05 ceph-mon[51870]: osdmap e588: 8 total, 8 up, 8 in 2026-03-09T20:32:53.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:53 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-118"}]: dispatch 2026-03-09T20:32:53.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:53 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:32:53.909 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:53 vm05 ceph-mon[61345]: pgmap v895: 268 pgs: 268 active+clean; 455 KiB data, 1005 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T20:32:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:53 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:32:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:53 vm05 ceph-mon[61345]: osdmap e588: 8 total, 8 up, 8 in 2026-03-09T20:32:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:53 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-118"}]: dispatch 2026-03-09T20:32:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:53 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:32:53.915 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:53 vm09 ceph-mon[54524]: pgmap v895: 268 pgs: 268 active+clean; 455 KiB data, 1005 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T20:32:53.915 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:53 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:32:53.915 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:53 vm09 ceph-mon[54524]: osdmap e588: 8 total, 8 up, 8 in 2026-03-09T20:32:53.915 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:53 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-118"}]: dispatch 2026-03-09T20:32:53.915 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:53 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:32:54.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:54 vm05 ceph-mon[61345]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:32:54.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:54 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-118"}]': finished 2026-03-09T20:32:54.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:54 vm05 ceph-mon[61345]: osdmap e589: 8 total, 8 up, 8 in 2026-03-09T20:32:54.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:54 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:32:54.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:54 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:32:54.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:54 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:32:54.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:54 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:32:54.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:54 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:32:54.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:54 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:32:54.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:54 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:32:54.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:54 vm05 ceph-mon[51870]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:32:54.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:54 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-118"}]': finished 2026-03-09T20:32:54.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:54 vm05 ceph-mon[51870]: osdmap e589: 8 total, 8 up, 8 in 2026-03-09T20:32:54.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:54 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:32:54.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:54 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:32:54.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:54 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:32:54.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:54 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:32:54.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:54 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:32:54.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:54 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:32:54.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:54 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:32:55.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:54 vm09 ceph-mon[54524]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:32:55.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:54 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-118"}]': finished 2026-03-09T20:32:55.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:54 vm09 ceph-mon[54524]: osdmap e589: 8 total, 8 up, 8 in 2026-03-09T20:32:55.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:54 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:32:55.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:54 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:32:55.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:54 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:32:55.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:54 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:32:55.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:54 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:32:55.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:54 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:32:55.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:54 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:32:55.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:55 vm05 ceph-mon[51870]: pgmap v898: 268 pgs: 268 active+clean; 455 KiB data, 1005 MiB used, 159 GiB / 160 GiB avail; 511 B/s wr, 0 op/s 2026-03-09T20:32:55.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:55 vm05 ceph-mon[51870]: osdmap e590: 8 total, 8 up, 8 in 2026-03-09T20:32:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:55 vm05 ceph-mon[61345]: pgmap v898: 268 pgs: 268 active+clean; 455 KiB data, 1005 MiB used, 159 GiB / 160 GiB avail; 511 B/s wr, 0 op/s 2026-03-09T20:32:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:55 vm05 ceph-mon[61345]: osdmap e590: 8 total, 8 up, 8 in 2026-03-09T20:32:56.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:55 vm09 ceph-mon[54524]: pgmap v898: 268 pgs: 268 active+clean; 455 KiB data, 1005 MiB used, 159 GiB / 160 GiB avail; 511 B/s wr, 0 op/s 2026-03-09T20:32:56.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:55 vm09 ceph-mon[54524]: osdmap e590: 8 total, 8 up, 8 in 2026-03-09T20:32:56.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:32:55 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:32:56.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:56 vm05 ceph-mon[61345]: osdmap e591: 8 total, 8 up, 8 in 2026-03-09T20:32:56.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:56 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-120","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:32:56.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:56 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:32:56.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:56 vm05 ceph-mon[61345]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:32:56.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:56 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-120","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:32:56.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:56 vm05 ceph-mon[61345]: osdmap e592: 8 total, 8 up, 8 in 2026-03-09T20:32:56.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:56 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-120", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:32:56.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:56 vm05 ceph-mon[51870]: osdmap e591: 8 total, 8 up, 8 in 2026-03-09T20:32:56.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:56 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-120","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:32:56.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:56 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:32:56.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:56 vm05 ceph-mon[51870]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:32:56.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:56 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-120","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:32:56.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:56 vm05 ceph-mon[51870]: osdmap e592: 8 total, 8 up, 8 in 2026-03-09T20:32:56.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:56 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-120", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:32:57.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:56 vm09 ceph-mon[54524]: osdmap e591: 8 total, 8 up, 8 in 2026-03-09T20:32:57.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:56 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-120","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:32:57.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:56 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:32:57.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:56 vm09 ceph-mon[54524]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:32:57.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:56 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-120","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:32:57.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:56 vm09 ceph-mon[54524]: osdmap e592: 8 total, 8 up, 8 in 2026-03-09T20:32:57.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:56 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-120", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:32:58.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:57 vm09 ceph-mon[54524]: pgmap v902: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1005 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T20:32:58.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:57 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-120", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:32:58.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:57 vm09 ceph-mon[54524]: osdmap e593: 8 total, 8 up, 8 in 2026-03-09T20:32:58.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:57 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-120"}]: dispatch 2026-03-09T20:32:58.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:57 vm05 ceph-mon[61345]: pgmap v902: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1005 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T20:32:58.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:57 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-120", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:32:58.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:57 vm05 ceph-mon[61345]: osdmap e593: 8 total, 8 up, 8 in 2026-03-09T20:32:58.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:57 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-120"}]: dispatch 2026-03-09T20:32:58.162 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:57 vm05 ceph-mon[51870]: pgmap v902: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1005 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T20:32:58.162 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:57 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-120", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:32:58.162 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:57 vm05 ceph-mon[51870]: osdmap e593: 8 total, 8 up, 8 in 2026-03-09T20:32:58.162 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:57 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-120"}]: dispatch 2026-03-09T20:32:58.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:32:58 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:32:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:32:59.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:59 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-120"}]': finished 2026-03-09T20:32:59.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:59 vm09 ceph-mon[54524]: osdmap e594: 8 total, 8 up, 8 in 2026-03-09T20:32:59.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:59 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-120", "mode": "writeback"}]: dispatch 2026-03-09T20:32:59.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:32:59 vm09 ceph-mon[54524]: pgmap v905: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1005 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T20:32:59.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:59 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-120"}]': finished 2026-03-09T20:32:59.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:59 vm05 ceph-mon[61345]: osdmap e594: 8 total, 8 up, 8 in 2026-03-09T20:32:59.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:59 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-120", "mode": "writeback"}]: dispatch 2026-03-09T20:32:59.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:32:59 vm05 ceph-mon[61345]: pgmap v905: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1005 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T20:32:59.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:59 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-120"}]': finished 2026-03-09T20:32:59.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:59 vm05 ceph-mon[51870]: osdmap e594: 8 total, 8 up, 8 in 2026-03-09T20:32:59.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:59 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-120", "mode": "writeback"}]: dispatch 2026-03-09T20:32:59.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:32:59 vm05 ceph-mon[51870]: pgmap v905: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1005 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T20:33:00.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:00 vm09 ceph-mon[54524]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:33:00.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:00 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-120", "mode": "writeback"}]': finished 2026-03-09T20:33:00.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:00 vm09 ceph-mon[54524]: osdmap e595: 8 total, 8 up, 8 in 2026-03-09T20:33:00.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:00 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:33:00.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:00 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:33:00.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:00 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:33:00.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:00 vm05 ceph-mon[61345]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:33:00.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:00 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-120", "mode": "writeback"}]': finished 2026-03-09T20:33:00.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:00 vm05 ceph-mon[61345]: osdmap e595: 8 total, 8 up, 8 in 2026-03-09T20:33:00.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:00 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:33:00.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:00 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:33:00.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:00 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:33:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:00 vm05 ceph-mon[51870]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:33:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:00 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-120", "mode": "writeback"}]': finished 2026-03-09T20:33:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:00 vm05 ceph-mon[51870]: osdmap e595: 8 total, 8 up, 8 in 2026-03-09T20:33:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:00 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:33:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:00 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:33:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:00 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:33:01.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:01 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:33:01.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:01 vm09 ceph-mon[54524]: osdmap e596: 8 total, 8 up, 8 in 2026-03-09T20:33:01.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:01 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-120"}]: dispatch 2026-03-09T20:33:01.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:01 vm09 ceph-mon[54524]: pgmap v908: 268 pgs: 18 unknown, 1 active+clean+snaptrim, 249 active+clean; 455 KiB data, 1005 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T20:33:01.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:01 vm09 ceph-mon[54524]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:33:01.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:01 vm09 ceph-mon[54524]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:33:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:01 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:33:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:01 vm05 ceph-mon[61345]: osdmap e596: 8 total, 8 up, 8 in 2026-03-09T20:33:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:01 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-120"}]: dispatch 2026-03-09T20:33:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:01 vm05 ceph-mon[61345]: pgmap v908: 268 pgs: 18 unknown, 1 active+clean+snaptrim, 249 active+clean; 455 KiB data, 1005 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T20:33:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:01 vm05 ceph-mon[61345]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:33:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:01 vm05 ceph-mon[61345]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:33:01.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:01 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:33:01.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:01 vm05 ceph-mon[51870]: osdmap e596: 8 total, 8 up, 8 in 2026-03-09T20:33:01.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:01 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-120"}]: dispatch 2026-03-09T20:33:01.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:01 vm05 ceph-mon[51870]: pgmap v908: 268 pgs: 18 unknown, 1 active+clean+snaptrim, 249 active+clean; 455 KiB data, 1005 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T20:33:01.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:01 vm05 ceph-mon[51870]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:33:01.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:01 vm05 ceph-mon[51870]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:33:02.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:02 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-120"}]': finished 2026-03-09T20:33:02.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:02 vm09 ceph-mon[54524]: osdmap e597: 8 total, 8 up, 8 in 2026-03-09T20:33:02.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:02 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-120"}]': finished 2026-03-09T20:33:02.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:02 vm05 ceph-mon[61345]: osdmap e597: 8 total, 8 up, 8 in 2026-03-09T20:33:02.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:02 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-120"}]': finished 2026-03-09T20:33:02.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:02 vm05 ceph-mon[51870]: osdmap e597: 8 total, 8 up, 8 in 2026-03-09T20:33:03.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:03 vm09 ceph-mon[54524]: osdmap e598: 8 total, 8 up, 8 in 2026-03-09T20:33:03.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:03 vm09 ceph-mon[54524]: pgmap v911: 236 pgs: 236 active+clean; 455 KiB data, 1006 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T20:33:03.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:03 vm09 ceph-mon[54524]: osdmap e599: 8 total, 8 up, 8 in 2026-03-09T20:33:03.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:03 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:33:03.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:03 vm05 ceph-mon[61345]: osdmap e598: 8 total, 8 up, 8 in 2026-03-09T20:33:03.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:03 vm05 ceph-mon[61345]: pgmap v911: 236 pgs: 236 active+clean; 455 KiB data, 1006 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T20:33:03.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:03 vm05 ceph-mon[61345]: osdmap e599: 8 total, 8 up, 8 in 2026-03-09T20:33:03.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:03 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:33:03.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:03 vm05 ceph-mon[51870]: osdmap e598: 8 total, 8 up, 8 in 2026-03-09T20:33:03.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:03 vm05 ceph-mon[51870]: pgmap v911: 236 pgs: 236 active+clean; 455 KiB data, 1006 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T20:33:03.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:03 vm05 ceph-mon[51870]: osdmap e599: 8 total, 8 up, 8 in 2026-03-09T20:33:03.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:03 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:33:05.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:05 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-122","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:33:05.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:05 vm09 ceph-mon[54524]: osdmap e600: 8 total, 8 up, 8 in 2026-03-09T20:33:05.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:05 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:33:05.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:05 vm09 ceph-mon[54524]: pgmap v914: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1006 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:33:05.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:05 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-122","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:33:05.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:05 vm05 ceph-mon[61345]: osdmap e600: 8 total, 8 up, 8 in 2026-03-09T20:33:05.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:05 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:33:05.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:05 vm05 ceph-mon[61345]: pgmap v914: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1006 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:33:05.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:05 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-122","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:33:05.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:05 vm05 ceph-mon[51870]: osdmap e600: 8 total, 8 up, 8 in 2026-03-09T20:33:05.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:05 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:33:05.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:05 vm05 ceph-mon[51870]: pgmap v914: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1006 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:33:06.254 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:33:05 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:33:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:06 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-122", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:33:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:06 vm09 ceph-mon[54524]: osdmap e601: 8 total, 8 up, 8 in 2026-03-09T20:33:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:06 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-122"}]: dispatch 2026-03-09T20:33:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:06 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:33:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:06 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-122"}]': finished 2026-03-09T20:33:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:06 vm09 ceph-mon[54524]: osdmap e602: 8 total, 8 up, 8 in 2026-03-09T20:33:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:06 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-122", "mode": "writeback"}]: dispatch 2026-03-09T20:33:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:06 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-122", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:33:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:06 vm05 ceph-mon[61345]: osdmap e601: 8 total, 8 up, 8 in 2026-03-09T20:33:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:06 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-122"}]: dispatch 2026-03-09T20:33:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:06 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:33:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:06 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-122"}]': finished 2026-03-09T20:33:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:06 vm05 ceph-mon[61345]: osdmap e602: 8 total, 8 up, 8 in 2026-03-09T20:33:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:06 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-122", "mode": "writeback"}]: dispatch 2026-03-09T20:33:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:06 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-122", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:33:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:06 vm05 ceph-mon[51870]: osdmap e601: 8 total, 8 up, 8 in 2026-03-09T20:33:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:06 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-122"}]: dispatch 2026-03-09T20:33:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:06 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:33:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:06 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-122"}]': finished 2026-03-09T20:33:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:06 vm05 ceph-mon[51870]: osdmap e602: 8 total, 8 up, 8 in 2026-03-09T20:33:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:06 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-122", "mode": "writeback"}]: dispatch 2026-03-09T20:33:07.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:07 vm09 ceph-mon[54524]: pgmap v917: 268 pgs: 268 active+clean; 455 KiB data, 1010 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:33:07.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:07 vm09 ceph-mon[54524]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:33:07.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:07 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-122", "mode": "writeback"}]': finished 2026-03-09T20:33:07.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:07 vm09 ceph-mon[54524]: osdmap e603: 8 total, 8 up, 8 in 2026-03-09T20:33:07.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:07 vm05 ceph-mon[61345]: pgmap v917: 268 pgs: 268 active+clean; 455 KiB data, 1010 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:33:07.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:07 vm05 ceph-mon[61345]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:33:07.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:07 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-122", "mode": "writeback"}]': finished 2026-03-09T20:33:07.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:07 vm05 ceph-mon[61345]: osdmap e603: 8 total, 8 up, 8 in 2026-03-09T20:33:07.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:07 vm05 ceph-mon[51870]: pgmap v917: 268 pgs: 268 active+clean; 455 KiB data, 1010 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:33:07.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:07 vm05 ceph-mon[51870]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:33:07.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:07 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-122", "mode": "writeback"}]': finished 2026-03-09T20:33:07.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:07 vm05 ceph-mon[51870]: osdmap e603: 8 total, 8 up, 8 in 2026-03-09T20:33:08.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:08 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:33:08.561 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:08 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:33:08.561 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:08 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:33:08.898 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:33:08 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:33:08] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:33:09.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:09 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:33:09.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:09 vm05 ceph-mon[61345]: osdmap e604: 8 total, 8 up, 8 in 2026-03-09T20:33:09.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:09 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-122"}]: dispatch 2026-03-09T20:33:09.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:09 vm05 ceph-mon[61345]: pgmap v920: 268 pgs: 268 active+clean; 455 KiB data, 1010 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:33:09.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:09 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:33:09.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:09 vm05 ceph-mon[51870]: osdmap e604: 8 total, 8 up, 8 in 2026-03-09T20:33:09.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:09 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-122"}]: dispatch 2026-03-09T20:33:09.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:09 vm05 ceph-mon[51870]: pgmap v920: 268 pgs: 268 active+clean; 455 KiB data, 1010 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:33:09.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:09 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:33:09.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:09 vm09 ceph-mon[54524]: osdmap e604: 8 total, 8 up, 8 in 2026-03-09T20:33:09.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:09 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-122"}]: dispatch 2026-03-09T20:33:09.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:09 vm09 ceph-mon[54524]: pgmap v920: 268 pgs: 268 active+clean; 455 KiB data, 1010 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:33:10.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:10 vm05 ceph-mon[51870]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:33:10.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:10 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-122"}]': finished 2026-03-09T20:33:10.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:10 vm05 ceph-mon[51870]: osdmap e605: 8 total, 8 up, 8 in 2026-03-09T20:33:10.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:10 vm05 ceph-mon[61345]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:33:10.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:10 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-122"}]': finished 2026-03-09T20:33:10.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:10 vm05 ceph-mon[61345]: osdmap e605: 8 total, 8 up, 8 in 2026-03-09T20:33:10.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:10 vm09 ceph-mon[54524]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:33:10.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:10 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-122"}]': finished 2026-03-09T20:33:10.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:10 vm09 ceph-mon[54524]: osdmap e605: 8 total, 8 up, 8 in 2026-03-09T20:33:11.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:11 vm05 ceph-mon[51870]: osdmap e606: 8 total, 8 up, 8 in 2026-03-09T20:33:11.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:11 vm05 ceph-mon[51870]: pgmap v923: 236 pgs: 236 active+clean; 455 KiB data, 1011 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:33:11.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:11 vm05 ceph-mon[51870]: osdmap e607: 8 total, 8 up, 8 in 2026-03-09T20:33:11.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:11 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-124","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:33:11.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:11 vm05 ceph-mon[61345]: osdmap e606: 8 total, 8 up, 8 in 2026-03-09T20:33:11.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:11 vm05 ceph-mon[61345]: pgmap v923: 236 pgs: 236 active+clean; 455 KiB data, 1011 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:33:11.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:11 vm05 ceph-mon[61345]: osdmap e607: 8 total, 8 up, 8 in 2026-03-09T20:33:11.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:11 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-124","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:33:11.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:11 vm09 ceph-mon[54524]: osdmap e606: 8 total, 8 up, 8 in 2026-03-09T20:33:11.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:11 vm09 ceph-mon[54524]: pgmap v923: 236 pgs: 236 active+clean; 455 KiB data, 1011 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:33:11.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:11 vm09 ceph-mon[54524]: osdmap e607: 8 total, 8 up, 8 in 2026-03-09T20:33:11.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:11 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-124","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:33:13.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:13 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-124","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:33:13.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:13 vm05 ceph-mon[51870]: osdmap e608: 8 total, 8 up, 8 in 2026-03-09T20:33:13.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:13 vm05 ceph-mon[51870]: pgmap v926: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1011 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:33:13.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:13 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-124","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:33:13.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:13 vm05 ceph-mon[61345]: osdmap e608: 8 total, 8 up, 8 in 2026-03-09T20:33:13.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:13 vm05 ceph-mon[61345]: pgmap v926: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1011 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:33:13.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:13 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-124","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:33:13.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:13 vm09 ceph-mon[54524]: osdmap e608: 8 total, 8 up, 8 in 2026-03-09T20:33:13.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:13 vm09 ceph-mon[54524]: pgmap v926: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1011 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:33:14.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:14 vm05 ceph-mon[51870]: osdmap e609: 8 total, 8 up, 8 in 2026-03-09T20:33:14.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:14 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-124", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:33:14.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:14 vm05 ceph-mon[61345]: osdmap e609: 8 total, 8 up, 8 in 2026-03-09T20:33:14.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:14 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-124", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:33:14.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:14 vm09 ceph-mon[54524]: osdmap e609: 8 total, 8 up, 8 in 2026-03-09T20:33:14.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:14 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-124", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:33:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:15 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-124", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:33:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:15 vm05 ceph-mon[51870]: osdmap e610: 8 total, 8 up, 8 in 2026-03-09T20:33:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:15 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-124"}]: dispatch 2026-03-09T20:33:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:15 vm05 ceph-mon[51870]: pgmap v929: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1011 MiB used, 159 GiB / 160 GiB avail; 0 B/s wr, 0 op/s 2026-03-09T20:33:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:15 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:33:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:15 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:33:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:15 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-124"}]': finished 2026-03-09T20:33:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:15 vm05 ceph-mon[51870]: osdmap e611: 8 total, 8 up, 8 in 2026-03-09T20:33:15.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:15 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-124", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:33:15.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:15 vm05 ceph-mon[61345]: osdmap e610: 8 total, 8 up, 8 in 2026-03-09T20:33:15.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:15 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-124"}]: dispatch 2026-03-09T20:33:15.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:15 vm05 ceph-mon[61345]: pgmap v929: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1011 MiB used, 159 GiB / 160 GiB avail; 0 B/s wr, 0 op/s 2026-03-09T20:33:15.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:15 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:33:15.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:15 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:33:15.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:15 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-124"}]': finished 2026-03-09T20:33:15.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:15 vm05 ceph-mon[61345]: osdmap e611: 8 total, 8 up, 8 in 2026-03-09T20:33:15.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:15 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-124", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:33:15.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:15 vm09 ceph-mon[54524]: osdmap e610: 8 total, 8 up, 8 in 2026-03-09T20:33:15.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:15 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-124"}]: dispatch 2026-03-09T20:33:15.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:15 vm09 ceph-mon[54524]: pgmap v929: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1011 MiB used, 159 GiB / 160 GiB avail; 0 B/s wr, 0 op/s 2026-03-09T20:33:15.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:15 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:33:15.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:15 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:33:15.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:15 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-124"}]': finished 2026-03-09T20:33:15.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:15 vm09 ceph-mon[54524]: osdmap e611: 8 total, 8 up, 8 in 2026-03-09T20:33:16.273 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:33:15 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:33:16.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:16 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-124", "mode": "writeback"}]: dispatch 2026-03-09T20:33:16.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:16 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:33:16.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:16 vm05 ceph-mon[51870]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:33:16.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:16 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-124", "mode": "writeback"}]': finished 2026-03-09T20:33:16.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:16 vm05 ceph-mon[51870]: osdmap e612: 8 total, 8 up, 8 in 2026-03-09T20:33:16.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:16 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-124", "mode": "writeback"}]: dispatch 2026-03-09T20:33:16.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:16 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:33:16.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:16 vm05 ceph-mon[61345]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:33:16.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:16 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-124", "mode": "writeback"}]': finished 2026-03-09T20:33:16.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:16 vm05 ceph-mon[61345]: osdmap e612: 8 total, 8 up, 8 in 2026-03-09T20:33:16.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:16 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-124", "mode": "writeback"}]: dispatch 2026-03-09T20:33:16.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:16 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:33:16.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:16 vm09 ceph-mon[54524]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:33:16.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:16 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-124", "mode": "writeback"}]': finished 2026-03-09T20:33:16.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:16 vm09 ceph-mon[54524]: osdmap e612: 8 total, 8 up, 8 in 2026-03-09T20:33:17.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:17 vm05 ceph-mon[51870]: pgmap v932: 268 pgs: 268 active+clean; 455 KiB data, 1012 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T20:33:17.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:17 vm05 ceph-mon[61345]: pgmap v932: 268 pgs: 268 active+clean; 455 KiB data, 1012 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T20:33:17.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:17 vm09 ceph-mon[54524]: pgmap v932: 268 pgs: 268 active+clean; 455 KiB data, 1012 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T20:33:18.634 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:18 vm05 ceph-mon[51870]: osdmap e613: 8 total, 8 up, 8 in 2026-03-09T20:33:18.634 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:18 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:33:18.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:18 vm05 ceph-mon[61345]: osdmap e613: 8 total, 8 up, 8 in 2026-03-09T20:33:18.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:18 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:33:18.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:18 vm09 ceph-mon[54524]: osdmap e613: 8 total, 8 up, 8 in 2026-03-09T20:33:18.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:18 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:33:18.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:33:18 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:33:18] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:33:19.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:19 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:33:19.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:19 vm05 ceph-mon[51870]: osdmap e614: 8 total, 8 up, 8 in 2026-03-09T20:33:19.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:19 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-124"}]: dispatch 2026-03-09T20:33:19.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:19 vm05 ceph-mon[51870]: pgmap v935: 268 pgs: 268 active+clean; 455 KiB data, 1012 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T20:33:19.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:19 vm05 ceph-mon[51870]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:33:19.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:19 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-124"}]': finished 2026-03-09T20:33:19.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:19 vm05 ceph-mon[51870]: osdmap e615: 8 total, 8 up, 8 in 2026-03-09T20:33:19.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:19 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:33:19.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:19 vm05 ceph-mon[61345]: osdmap e614: 8 total, 8 up, 8 in 2026-03-09T20:33:19.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:19 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-124"}]: dispatch 2026-03-09T20:33:19.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:19 vm05 ceph-mon[61345]: pgmap v935: 268 pgs: 268 active+clean; 455 KiB data, 1012 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T20:33:19.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:19 vm05 ceph-mon[61345]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:33:19.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:19 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-124"}]': finished 2026-03-09T20:33:19.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:19 vm05 ceph-mon[61345]: osdmap e615: 8 total, 8 up, 8 in 2026-03-09T20:33:19.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:19 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:33:19.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:19 vm09 ceph-mon[54524]: osdmap e614: 8 total, 8 up, 8 in 2026-03-09T20:33:19.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:19 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-124"}]: dispatch 2026-03-09T20:33:19.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:19 vm09 ceph-mon[54524]: pgmap v935: 268 pgs: 268 active+clean; 455 KiB data, 1012 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-09T20:33:19.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:19 vm09 ceph-mon[54524]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:33:19.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:19 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-124"}]': finished 2026-03-09T20:33:19.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:19 vm09 ceph-mon[54524]: osdmap e615: 8 total, 8 up, 8 in 2026-03-09T20:33:20.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:20 vm09 ceph-mon[54524]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:33:20.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:20 vm09 ceph-mon[54524]: osdmap e616: 8 total, 8 up, 8 in 2026-03-09T20:33:20.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:20 vm05 ceph-mon[51870]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:33:20.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:20 vm05 ceph-mon[51870]: osdmap e616: 8 total, 8 up, 8 in 2026-03-09T20:33:20.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:20 vm05 ceph-mon[61345]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:33:20.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:20 vm05 ceph-mon[61345]: osdmap e616: 8 total, 8 up, 8 in 2026-03-09T20:33:21.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:21 vm09 ceph-mon[54524]: pgmap v937: 268 pgs: 268 active+clean; 455 KiB data, 1012 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T20:33:21.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:21 vm09 ceph-mon[54524]: osdmap e617: 8 total, 8 up, 8 in 2026-03-09T20:33:21.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:21 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-126","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:33:21.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:21 vm05 ceph-mon[51870]: pgmap v937: 268 pgs: 268 active+clean; 455 KiB data, 1012 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T20:33:21.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:21 vm05 ceph-mon[51870]: osdmap e617: 8 total, 8 up, 8 in 2026-03-09T20:33:21.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:21 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-126","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:33:21.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:21 vm05 ceph-mon[61345]: pgmap v937: 268 pgs: 268 active+clean; 455 KiB data, 1012 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-09T20:33:21.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:21 vm05 ceph-mon[61345]: osdmap e617: 8 total, 8 up, 8 in 2026-03-09T20:33:21.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:21 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-126","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:33:23.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:23 vm09 ceph-mon[54524]: pgmap v940: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:33:23.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:23 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-126","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:33:23.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:23 vm09 ceph-mon[54524]: osdmap e618: 8 total, 8 up, 8 in 2026-03-09T20:33:23.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:23 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-126", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:33:23.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:23 vm05 ceph-mon[51870]: pgmap v940: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:33:23.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:23 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-126","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:33:23.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:23 vm05 ceph-mon[51870]: osdmap e618: 8 total, 8 up, 8 in 2026-03-09T20:33:23.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:23 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-126", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:33:23.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:23 vm05 ceph-mon[61345]: pgmap v940: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:33:23.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:23 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-126","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:33:23.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:23 vm05 ceph-mon[61345]: osdmap e618: 8 total, 8 up, 8 in 2026-03-09T20:33:23.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:23 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-126", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:33:24.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:24 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-126", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:33:24.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:24 vm09 ceph-mon[54524]: osdmap e619: 8 total, 8 up, 8 in 2026-03-09T20:33:24.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:24 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-126"}]: dispatch 2026-03-09T20:33:24.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:24 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-126"}]': finished 2026-03-09T20:33:24.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:24 vm09 ceph-mon[54524]: osdmap e620: 8 total, 8 up, 8 in 2026-03-09T20:33:24.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:24 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-126", "mode": "writeback"}]: dispatch 2026-03-09T20:33:24.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:24 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-126", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:33:24.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:24 vm05 ceph-mon[61345]: osdmap e619: 8 total, 8 up, 8 in 2026-03-09T20:33:24.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:24 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-126"}]: dispatch 2026-03-09T20:33:24.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:24 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-126"}]': finished 2026-03-09T20:33:24.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:24 vm05 ceph-mon[61345]: osdmap e620: 8 total, 8 up, 8 in 2026-03-09T20:33:24.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:24 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-126", "mode": "writeback"}]: dispatch 2026-03-09T20:33:24.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:24 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-126", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:33:24.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:24 vm05 ceph-mon[51870]: osdmap e619: 8 total, 8 up, 8 in 2026-03-09T20:33:24.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:24 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-126"}]: dispatch 2026-03-09T20:33:24.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:24 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-126"}]': finished 2026-03-09T20:33:24.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:24 vm05 ceph-mon[51870]: osdmap e620: 8 total, 8 up, 8 in 2026-03-09T20:33:24.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:24 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-126", "mode": "writeback"}]: dispatch 2026-03-09T20:33:25.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:25 vm09 ceph-mon[54524]: pgmap v943: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:33:25.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:25 vm09 ceph-mon[54524]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:33:25.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:25 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-126", "mode": "writeback"}]': finished 2026-03-09T20:33:25.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:25 vm09 ceph-mon[54524]: osdmap e621: 8 total, 8 up, 8 in 2026-03-09T20:33:25.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:25 vm05 ceph-mon[61345]: pgmap v943: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:33:25.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:25 vm05 ceph-mon[61345]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:33:25.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:25 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-126", "mode": "writeback"}]': finished 2026-03-09T20:33:25.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:25 vm05 ceph-mon[61345]: osdmap e621: 8 total, 8 up, 8 in 2026-03-09T20:33:25.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:25 vm05 ceph-mon[51870]: pgmap v943: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:33:25.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:25 vm05 ceph-mon[51870]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:33:25.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:25 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-126", "mode": "writeback"}]': finished 2026-03-09T20:33:25.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:25 vm05 ceph-mon[51870]: osdmap e621: 8 total, 8 up, 8 in 2026-03-09T20:33:26.273 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:33:25 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:33:26.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:26 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:33:26.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:26 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:33:26.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:26 vm09 ceph-mon[54524]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:33:26.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:26 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:33:26.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:26 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:33:26.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:26 vm05 ceph-mon[61345]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:33:26.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:26 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:33:26.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:26 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:33:26.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:26 vm05 ceph-mon[51870]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:33:27.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:27 vm05 ceph-mon[61345]: pgmap v946: 268 pgs: 268 active+clean; 455 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:33:27.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:27 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:33:27.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:27 vm05 ceph-mon[61345]: osdmap e622: 8 total, 8 up, 8 in 2026-03-09T20:33:27.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:27 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-126"}]: dispatch 2026-03-09T20:33:27.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:27 vm05 ceph-mon[51870]: pgmap v946: 268 pgs: 268 active+clean; 455 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:33:27.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:27 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:33:27.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:27 vm05 ceph-mon[51870]: osdmap e622: 8 total, 8 up, 8 in 2026-03-09T20:33:27.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:27 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-126"}]: dispatch 2026-03-09T20:33:28.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:27 vm09 ceph-mon[54524]: pgmap v946: 268 pgs: 268 active+clean; 455 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:33:28.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:27 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:33:28.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:27 vm09 ceph-mon[54524]: osdmap e622: 8 total, 8 up, 8 in 2026-03-09T20:33:28.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:27 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-126"}]: dispatch 2026-03-09T20:33:28.645 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T20:33:28.645 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.TierFlushDuringFlush (9215 ms) 2026-03-09T20:33:28.645 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestSnapHasChunk 2026-03-09T20:33:28.645 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T20:33:28.645 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestSnapHasChunk (6254 ms) 2026-03-09T20:33:28.645 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestRollback 2026-03-09T20:33:28.645 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T20:33:28.646 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestRollback (5538 ms) 2026-03-09T20:33:28.646 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestRollbackRefcount 2026-03-09T20:33:28.646 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T20:33:28.646 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestRollbackRefcount (24679 ms) 2026-03-09T20:33:28.646 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestEvictRollback 2026-03-09T20:33:28.646 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T20:33:28.646 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestEvictRollback (12941 ms) 2026-03-09T20:33:28.646 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.PropagateBaseTierError 2026-03-09T20:33:28.646 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.PropagateBaseTierError (12293 ms) 2026-03-09T20:33:28.646 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.HelloWriteReturn 2026-03-09T20:33:28.646 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: 00000000 79 6f 75 20 6d 69 67 68 74 20 73 65 65 20 74 68 |you might see th| 2026-03-09T20:33:28.646 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: 00000010 69 73 |is| 2026-03-09T20:33:28.646 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: 00000012 2026-03-09T20:33:28.646 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.HelloWriteReturn (12135 ms) 2026-03-09T20:33:28.646 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.TierFlushDuringUnsetDedupTier 2026-03-09T20:33:28.646 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: require_osd_release = squid 2026-03-09T20:33:28.646 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.TierFlushDuringUnsetDedupTier (6115 ms) 2026-03-09T20:33:28.646 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [----------] 48 tests from LibRadosTwoPoolsPP (558062 ms total) 2026-03-09T20:33:28.646 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: 2026-03-09T20:33:28.646 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [----------] 4 tests from LibRadosTierECPP 2026-03-09T20:33:28.646 
INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTierECPP.Dirty 2026-03-09T20:33:28.646 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTierECPP.Dirty (1030 ms) 2026-03-09T20:33:28.646 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTierECPP.FlushWriteRaces 2026-03-09T20:33:28.646 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTierECPP.FlushWriteRaces (11075 ms) 2026-03-09T20:33:28.646 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTierECPP.CallForcesPromote 2026-03-09T20:33:28.646 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTierECPP.CallForcesPromote (18271 ms) 2026-03-09T20:33:28.646 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTierECPP.HitSetNone 2026-03-09T20:33:28.646 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTierECPP.HitSetNone (8 ms) 2026-03-09T20:33:28.646 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [----------] 4 tests from LibRadosTierECPP (30384 ms total) 2026-03-09T20:33:28.646 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: 2026-03-09T20:33:28.646 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [----------] 22 tests from LibRadosTwoPoolsECPP 2026-03-09T20:33:28.646 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.Overlay 2026-03-09T20:33:28.646 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.Overlay (7115 ms) 2026-03-09T20:33:28.646 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.Promote 2026-03-09T20:33:28.646 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.Promote (8098 ms) 2026-03-09T20:33:28.646 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.PromoteSnap 2026-03-09T20:33:28.646 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: waiting for scrub... 
2026-03-09T20:33:28.646 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: done waiting 2026-03-09T20:33:28.646 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.PromoteSnap (24780 ms) 2026-03-09T20:33:28.646 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.PromoteSnapTrimRace 2026-03-09T20:33:28.646 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.PromoteSnapTrimRace (10091 ms) 2026-03-09T20:33:28.646 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.Whiteout 2026-03-09T20:33:28.646 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.Whiteout (7605 ms) 2026-03-09T20:33:28.646 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.Evict 2026-03-09T20:33:28.646 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.Evict (8036 ms) 2026-03-09T20:33:28.646 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.EvictSnap 2026-03-09T20:33:28.646 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.EvictSnap (10095 ms) 2026-03-09T20:33:28.646 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.TryFlush 2026-03-09T20:33:28.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:28 vm05 ceph-mon[61345]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:33:28.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:28 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-126"}]': finished 2026-03-09T20:33:28.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:28 vm05 ceph-mon[61345]: osdmap e623: 8 total, 8 up, 8 in 2026-03-09T20:33:28.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:28 vm05 ceph-mon[51870]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:33:28.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:28 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-126"}]': finished 2026-03-09T20:33:28.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:28 vm05 ceph-mon[51870]: osdmap e623: 8 total, 8 up, 8 in 2026-03-09T20:33:28.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:33:28 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:33:28] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:33:29.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:28 vm09 ceph-mon[54524]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:33:29.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:28 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-126"}]': finished 2026-03-09T20:33:29.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:28 vm09 ceph-mon[54524]: osdmap e623: 8 total, 8 up, 8 in 2026-03-09T20:33:29.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:29 vm05 ceph-mon[61345]: pgmap v949: 268 pgs: 268 active+clean; 455 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:33:29.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:29 vm05 ceph-mon[61345]: osdmap e624: 8 total, 8 up, 8 in 2026-03-09T20:33:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:29 vm05 ceph-mon[51870]: pgmap v949: 268 pgs: 268 active+clean; 455 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:33:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:29 vm05 ceph-mon[51870]: osdmap e624: 8 total, 8 up, 8 in 2026-03-09T20:33:30.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:29 vm09 ceph-mon[54524]: pgmap v949: 268 pgs: 268 active+clean; 455 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:33:30.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:29 vm09 ceph-mon[54524]: osdmap e624: 8 total, 8 up, 8 in 2026-03-09T20:33:31.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:30 vm05 ceph-mon[61345]: osdmap e625: 8 total, 8 up, 8 in 2026-03-09T20:33:31.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:30 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-128","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:33:31.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:30 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:33:31.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:30 vm05 ceph-mon[51870]: osdmap e625: 8 total, 8 up, 8 in 2026-03-09T20:33:31.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:30 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-128","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:33:31.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:30 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:33:31.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:30 vm09 ceph-mon[54524]: osdmap e625: 8 total, 8 up, 8 in 2026-03-09T20:33:31.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:30 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-128","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:33:31.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:30 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:33:32.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:31 vm05 ceph-mon[61345]: pgmap v952: 268 pgs: 6 creating+peering, 26 unknown, 236 active+clean; 455 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:33:32.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:31 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-128","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:33:32.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:31 vm05 ceph-mon[61345]: osdmap e626: 8 total, 8 up, 8 in 2026-03-09T20:33:32.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:31 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:33:32.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:31 vm05 ceph-mon[61345]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:33:32.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:31 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-128", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:33:32.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:31 vm05 ceph-mon[61345]: osdmap e627: 8 total, 8 up, 8 in 2026-03-09T20:33:32.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:31 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-128"}]: dispatch 2026-03-09T20:33:32.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:31 vm05 ceph-mon[51870]: pgmap v952: 268 pgs: 6 creating+peering, 26 unknown, 236 active+clean; 455 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:33:32.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:31 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-128","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:33:32.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:31 vm05 ceph-mon[51870]: osdmap e626: 8 total, 8 up, 8 in 2026-03-09T20:33:32.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:31 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:33:32.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:31 vm05 ceph-mon[51870]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:33:32.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:31 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-128", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:33:32.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:31 vm05 ceph-mon[51870]: osdmap e627: 8 total, 8 up, 8 in 2026-03-09T20:33:32.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:31 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-128"}]: dispatch 2026-03-09T20:33:32.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:31 vm09 ceph-mon[54524]: pgmap v952: 268 pgs: 6 creating+peering, 26 unknown, 236 active+clean; 455 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:33:32.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:31 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-128","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:33:32.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:31 vm09 ceph-mon[54524]: osdmap e626: 8 total, 8 up, 8 in 2026-03-09T20:33:32.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:31 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:33:32.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:31 vm09 ceph-mon[54524]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:33:32.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:31 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-128", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:33:32.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:31 vm09 ceph-mon[54524]: osdmap e627: 8 total, 8 up, 8 in 2026-03-09T20:33:32.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:31 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-128"}]: dispatch 2026-03-09T20:33:33.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:33 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-128"}]': finished 2026-03-09T20:33:33.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:33 vm05 ceph-mon[61345]: osdmap e628: 8 total, 8 up, 8 in 2026-03-09T20:33:33.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:33 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-128", "mode": "writeback"}]: dispatch 2026-03-09T20:33:33.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:33 vm05 ceph-mon[61345]: pgmap v956: 268 pgs: 32 creating+peering, 236 active+clean; 455 KiB data, 1014 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T20:33:33.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:33 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-128"}]': finished 2026-03-09T20:33:33.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:33 vm05 ceph-mon[51870]: osdmap e628: 8 total, 8 up, 8 in 2026-03-09T20:33:33.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:33 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-128", "mode": "writeback"}]: dispatch 2026-03-09T20:33:33.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:33 vm05 ceph-mon[51870]: pgmap v956: 268 pgs: 32 creating+peering, 236 active+clean; 455 KiB data, 1014 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T20:33:33.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:33 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-128"}]': finished 2026-03-09T20:33:33.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:33 vm09 ceph-mon[54524]: osdmap e628: 8 total, 8 up, 8 in 2026-03-09T20:33:33.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:33 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-128", "mode": "writeback"}]: dispatch 2026-03-09T20:33:33.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:33 vm09 ceph-mon[54524]: pgmap v956: 268 pgs: 32 creating+peering, 236 active+clean; 455 KiB data, 1014 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T20:33:34.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:34 vm05 ceph-mon[61345]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:33:34.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:34 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-128", "mode": "writeback"}]': finished 2026-03-09T20:33:34.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:34 vm05 ceph-mon[61345]: osdmap e629: 8 total, 8 up, 8 in 2026-03-09T20:33:34.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:34 vm05 ceph-mon[51870]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:33:34.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:34 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-128", "mode": "writeback"}]': finished 2026-03-09T20:33:34.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:34 vm05 ceph-mon[51870]: osdmap e629: 8 total, 8 up, 8 in 2026-03-09T20:33:34.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:34 vm09 ceph-mon[54524]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:33:34.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:34 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-128", "mode": "writeback"}]': finished 2026-03-09T20:33:34.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:34 vm09 ceph-mon[54524]: osdmap e629: 8 total, 8 up, 8 in 2026-03-09T20:33:35.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:35 vm05 ceph-mon[61345]: pgmap v958: 268 pgs: 32 creating+peering, 236 active+clean; 455 KiB data, 1014 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:33:35.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:35 vm05 ceph-mon[51870]: pgmap v958: 268 pgs: 32 creating+peering, 236 active+clean; 455 KiB data, 1014 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:33:35.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:35 vm09 ceph-mon[54524]: pgmap v958: 268 pgs: 32 creating+peering, 236 active+clean; 455 KiB data, 1014 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:33:36.140 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:33:35 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:33:36.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:36 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:33:36.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:36 vm05 ceph-mon[61345]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:33:36.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:36 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:33:36.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:36 vm05 ceph-mon[51870]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:33:36.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:36 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:33:36.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:36 vm09 
ceph-mon[54524]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:33:37.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:37 vm05 ceph-mon[61345]: pgmap v959: 268 pgs: 268 active+clean; 455 KiB data, 1014 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 368 B/s wr, 1 op/s 2026-03-09T20:33:37.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:37 vm05 ceph-mon[51870]: pgmap v959: 268 pgs: 268 active+clean; 455 KiB data, 1014 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 368 B/s wr, 1 op/s 2026-03-09T20:33:37.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:37 vm09 ceph-mon[54524]: pgmap v959: 268 pgs: 268 active+clean; 455 KiB data, 1014 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 368 B/s wr, 1 op/s 2026-03-09T20:33:38.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:38 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:33:38.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:38 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:33:38.634 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:38 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:33:38.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:33:38 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:33:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:33:39.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:39 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:33:39.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:39 vm05 ceph-mon[61345]: osdmap e630: 8 total, 8 up, 8 in 2026-03-09T20:33:39.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:39 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-128"}]: dispatch 2026-03-09T20:33:39.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:39 vm05 ceph-mon[61345]: pgmap v961: 268 pgs: 268 active+clean; 455 KiB data, 1014 MiB used, 159 GiB / 160 GiB avail; 984 B/s rd, 328 B/s wr, 1 op/s 2026-03-09T20:33:39.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:39 vm05 ceph-mon[61345]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:33:39.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:39 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-128"}]': finished 2026-03-09T20:33:39.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:39 vm05 ceph-mon[61345]: osdmap e631: 8 total, 8 up, 8 in 2026-03-09T20:33:39.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:39 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:33:39.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:39 vm05 ceph-mon[51870]: osdmap e630: 8 total, 8 up, 8 in 2026-03-09T20:33:39.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:39 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-128"}]: dispatch 2026-03-09T20:33:39.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:39 vm05 ceph-mon[51870]: pgmap v961: 268 pgs: 268 active+clean; 455 KiB data, 1014 MiB used, 159 GiB / 160 GiB avail; 984 B/s rd, 328 B/s wr, 1 op/s 2026-03-09T20:33:39.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:39 vm05 ceph-mon[51870]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:33:39.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:39 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-128"}]': finished 2026-03-09T20:33:39.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:39 vm05 ceph-mon[51870]: osdmap e631: 8 total, 8 up, 8 in 2026-03-09T20:33:39.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:39 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:33:39.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:39 vm09 ceph-mon[54524]: osdmap e630: 8 total, 8 up, 8 in 2026-03-09T20:33:39.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:39 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-128"}]: dispatch 2026-03-09T20:33:39.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:39 vm09 ceph-mon[54524]: pgmap v961: 268 pgs: 268 active+clean; 455 KiB data, 1014 MiB used, 159 GiB / 160 GiB avail; 984 B/s rd, 328 B/s wr, 1 op/s 2026-03-09T20:33:39.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:39 vm09 ceph-mon[54524]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:33:39.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:39 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-128"}]': finished 2026-03-09T20:33:39.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:39 vm09 ceph-mon[54524]: osdmap e631: 8 total, 8 up, 8 in 2026-03-09T20:33:41.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:41 vm05 ceph-mon[61345]: osdmap e632: 8 total, 8 up, 8 in 2026-03-09T20:33:41.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:41 vm05 ceph-mon[61345]: pgmap v964: 236 pgs: 236 active+clean; 455 KiB data, 1014 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T20:33:41.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:41 vm05 ceph-mon[51870]: osdmap e632: 8 total, 8 up, 8 in 2026-03-09T20:33:41.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:41 vm05 ceph-mon[51870]: pgmap v964: 236 pgs: 236 active+clean; 455 KiB data, 1014 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T20:33:41.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:41 vm09 ceph-mon[54524]: osdmap e632: 8 total, 8 up, 8 in 2026-03-09T20:33:41.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:41 vm09 ceph-mon[54524]: pgmap v964: 236 pgs: 236 active+clean; 455 KiB data, 1014 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T20:33:42.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:42 vm05 ceph-mon[61345]: osdmap e633: 8 total, 8 up, 8 in 2026-03-09T20:33:42.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:42 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-130","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:33:42.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:42 vm05 ceph-mon[51870]: osdmap e633: 8 total, 8 up, 8 in 2026-03-09T20:33:42.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:42 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-130","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:33:42.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:42 vm09 ceph-mon[54524]: osdmap e633: 8 total, 8 up, 8 in 2026-03-09T20:33:42.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:42 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-130","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:33:43.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:43 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-130","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:33:43.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:43 vm05 ceph-mon[61345]: osdmap e634: 8 total, 8 up, 8 in 2026-03-09T20:33:43.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:43 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-130", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:33:43.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:43 vm05 ceph-mon[61345]: pgmap v967: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1014 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:33:43.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:43 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-130", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:33:43.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:43 vm05 ceph-mon[61345]: osdmap e635: 8 total, 8 up, 8 in 2026-03-09T20:33:43.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:43 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-130"}]: dispatch 2026-03-09T20:33:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:43 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-130","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:33:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:43 vm05 ceph-mon[51870]: osdmap e634: 8 total, 8 up, 8 in 2026-03-09T20:33:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:43 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-130", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:33:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:43 vm05 ceph-mon[51870]: pgmap v967: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1014 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:33:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:43 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-130", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:33:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:43 vm05 ceph-mon[51870]: osdmap e635: 8 total, 8 up, 8 in 2026-03-09T20:33:43.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:43 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-130"}]: dispatch 2026-03-09T20:33:43.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:43 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-130","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:33:43.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:43 vm09 ceph-mon[54524]: osdmap e634: 8 total, 8 up, 8 in 2026-03-09T20:33:43.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:43 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-130", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:33:43.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:43 vm09 ceph-mon[54524]: pgmap v967: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1014 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:33:43.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:43 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-130", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:33:43.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:43 vm09 ceph-mon[54524]: osdmap e635: 8 total, 8 up, 8 in 2026-03-09T20:33:43.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:43 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-130"}]: dispatch 2026-03-09T20:33:45.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:45 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-130"}]': finished 2026-03-09T20:33:45.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:45 vm05 ceph-mon[61345]: osdmap e636: 8 total, 8 up, 8 in 2026-03-09T20:33:45.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:45 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-130", "mode": "writeback"}]: dispatch 2026-03-09T20:33:45.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:45 vm05 ceph-mon[61345]: pgmap v970: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1014 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:33:45.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:45 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:33:45.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:45 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-130"}]': finished 2026-03-09T20:33:45.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:45 vm05 ceph-mon[51870]: osdmap e636: 8 total, 8 up, 8 in 2026-03-09T20:33:45.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:45 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-130", "mode": "writeback"}]: dispatch 2026-03-09T20:33:45.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:45 vm05 ceph-mon[51870]: pgmap v970: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1014 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:33:45.661 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:45 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:33:45.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:45 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-130"}]': finished 2026-03-09T20:33:45.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:45 vm09 ceph-mon[54524]: osdmap e636: 8 total, 8 up, 8 in 2026-03-09T20:33:45.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:45 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-130", "mode": "writeback"}]: dispatch 2026-03-09T20:33:45.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:45 vm09 ceph-mon[54524]: pgmap v970: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1014 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:33:45.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:45 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:33:46.273 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:33:45 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:33:46.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:46 vm05 ceph-mon[61345]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:33:46.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:46 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-130", "mode": "writeback"}]': finished 2026-03-09T20:33:46.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:46 vm05 ceph-mon[61345]: osdmap e637: 8 total, 8 up, 8 in 2026-03-09T20:33:46.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:46 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:33:46.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:46 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:33:46.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:46 vm05 ceph-mon[51870]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:33:46.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:46 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-130", "mode": "writeback"}]': finished 2026-03-09T20:33:46.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:46 vm05 ceph-mon[51870]: osdmap e637: 8 total, 8 up, 8 in 2026-03-09T20:33:46.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:46 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:33:46.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:46 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:33:46.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:46 vm09 ceph-mon[54524]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:33:46.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:46 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-130", "mode": "writeback"}]': finished 2026-03-09T20:33:46.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:46 vm09 ceph-mon[54524]: osdmap e637: 8 total, 8 up, 8 in 2026-03-09T20:33:46.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:46 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:33:46.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:46 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:33:47.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:47 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:33:47.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:47 vm05 ceph-mon[61345]: osdmap e638: 8 total, 8 up, 8 in 2026-03-09T20:33:47.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:47 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-130"}]: dispatch 2026-03-09T20:33:47.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:47 vm05 ceph-mon[61345]: pgmap v973: 268 pgs: 268 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T20:33:47.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:47 vm05 ceph-mon[61345]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:33:47.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:47 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:33:47.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:47 vm05 ceph-mon[51870]: osdmap e638: 8 total, 8 up, 8 in 2026-03-09T20:33:47.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:47 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-130"}]: dispatch 2026-03-09T20:33:47.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:47 vm05 ceph-mon[51870]: pgmap v973: 268 pgs: 268 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T20:33:47.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:47 vm05 ceph-mon[51870]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:33:47.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:47 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:33:47.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:47 vm09 ceph-mon[54524]: osdmap e638: 8 total, 8 up, 8 in 2026-03-09T20:33:47.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:47 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-130"}]: dispatch 2026-03-09T20:33:47.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:47 vm09 ceph-mon[54524]: pgmap v973: 268 pgs: 268 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T20:33:47.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:47 vm09 ceph-mon[54524]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:33:48.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:48 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-130"}]': finished 2026-03-09T20:33:48.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:48 vm05 ceph-mon[61345]: osdmap e639: 8 total, 8 up, 8 in 2026-03-09T20:33:48.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:48 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-130"}]': finished 2026-03-09T20:33:48.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:48 vm05 ceph-mon[51870]: osdmap e639: 8 total, 8 up, 8 in 2026-03-09T20:33:48.660 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:33:48 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:33:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:33:48.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:48 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-130"}]': finished 2026-03-09T20:33:48.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:48 vm09 ceph-mon[54524]: osdmap e639: 8 total, 8 up, 8 in 2026-03-09T20:33:49.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:49 vm09 ceph-mon[54524]: pgmap v975: 268 pgs: 268 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 503 B/s wr, 1 op/s 2026-03-09T20:33:49.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:49 vm09 ceph-mon[54524]: osdmap e640: 8 total, 8 up, 8 in 2026-03-09T20:33:49.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:49 vm09 ceph-mon[54524]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:33:49.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:49 vm05 ceph-mon[61345]: pgmap v975: 268 pgs: 268 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 503 B/s wr, 1 op/s 2026-03-09T20:33:49.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:49 vm05 ceph-mon[61345]: osdmap e640: 8 total, 8 up, 8 in 2026-03-09T20:33:49.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:49 vm05 ceph-mon[61345]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:33:49.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:49 vm05 ceph-mon[51870]: pgmap v975: 268 pgs: 268 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 503 B/s wr, 1 op/s 2026-03-09T20:33:49.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:49 vm05 ceph-mon[51870]: osdmap e640: 8 total, 8 up, 8 in 2026-03-09T20:33:49.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:49 vm05 ceph-mon[51870]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:33:50.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:50 vm09 ceph-mon[54524]: osdmap e641: 8 total, 8 up, 8 in 2026-03-09T20:33:50.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:50 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:33:50.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:50 vm05 ceph-mon[61345]: osdmap e641: 8 total, 8 up, 8 in 2026-03-09T20:33:50.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:50 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:33:50.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:50 vm05 ceph-mon[51870]: osdmap e641: 8 total, 8 up, 8 in 2026-03-09T20:33:50.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:50 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:33:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:51 vm09 ceph-mon[54524]: pgmap v978: 268 pgs: 19 creating+peering, 13 unknown, 236 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:33:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:51 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-132","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:33:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:51 vm09 ceph-mon[54524]: osdmap e642: 8 total, 8 up, 8 in 2026-03-09T20:33:51.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:51 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:33:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:51 vm05 ceph-mon[61345]: pgmap v978: 268 pgs: 19 creating+peering, 13 unknown, 236 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:33:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:51 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-132","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:33:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:51 vm05 ceph-mon[61345]: osdmap e642: 8 total, 8 up, 8 in 2026-03-09T20:33:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:51 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:33:51.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:51 vm05 ceph-mon[51870]: pgmap v978: 268 pgs: 19 creating+peering, 13 unknown, 236 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:33:51.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:51 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-132","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:33:51.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:51 vm05 ceph-mon[51870]: osdmap e642: 8 total, 8 up, 8 in 2026-03-09T20:33:51.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:51 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:33:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:52 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-132", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:33:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:52 vm05 ceph-mon[61345]: osdmap e643: 8 total, 8 up, 8 in 2026-03-09T20:33:52.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:52 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-132"}]: dispatch 2026-03-09T20:33:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:52 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-132", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:33:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:52 vm05 ceph-mon[51870]: osdmap e643: 8 total, 8 up, 8 in 2026-03-09T20:33:52.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:52 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-132"}]: dispatch 2026-03-09T20:33:53.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:52 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-132", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:33:53.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:52 vm09 ceph-mon[54524]: osdmap e643: 8 total, 8 up, 8 in 2026-03-09T20:33:53.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:52 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-132"}]: dispatch 2026-03-09T20:33:53.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:53 vm05 ceph-mon[51870]: pgmap v981: 268 pgs: 23 creating+peering, 245 active+clean; 455 KiB data, 1016 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:33:53.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:53 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-132"}]': finished 2026-03-09T20:33:53.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:53 vm05 ceph-mon[51870]: osdmap e644: 8 total, 8 up, 8 in 2026-03-09T20:33:53.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:53 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-132", "mode": "writeback"}]: dispatch 2026-03-09T20:33:53.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:53 vm05 ceph-mon[51870]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:33:53.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:53 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-132", "mode": "writeback"}]': finished 2026-03-09T20:33:53.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:53 vm05 ceph-mon[51870]: osdmap e645: 8 total, 8 up, 8 in 2026-03-09T20:33:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:53 vm05 ceph-mon[61345]: pgmap v981: 268 pgs: 23 creating+peering, 245 active+clean; 455 KiB data, 1016 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:33:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:53 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-132"}]': finished 2026-03-09T20:33:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:53 vm05 ceph-mon[61345]: osdmap e644: 8 total, 8 up, 8 in 2026-03-09T20:33:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:53 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-132", "mode": "writeback"}]: dispatch 2026-03-09T20:33:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:53 vm05 ceph-mon[61345]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:33:53.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:53 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-132", "mode": "writeback"}]': finished 2026-03-09T20:33:53.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:53 vm05 ceph-mon[61345]: osdmap e645: 8 total, 8 up, 8 in 2026-03-09T20:33:54.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:53 vm09 ceph-mon[54524]: pgmap v981: 268 pgs: 23 creating+peering, 245 active+clean; 455 KiB data, 1016 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:33:54.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:53 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-132"}]': finished 2026-03-09T20:33:54.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:53 vm09 ceph-mon[54524]: osdmap e644: 8 total, 8 up, 8 in 2026-03-09T20:33:54.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:53 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-132", "mode": "writeback"}]: dispatch 2026-03-09T20:33:54.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:53 vm09 ceph-mon[54524]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:33:54.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:53 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-132", "mode": "writeback"}]': finished 2026-03-09T20:33:54.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:53 vm09 ceph-mon[54524]: osdmap e645: 8 total, 8 up, 8 in 2026-03-09T20:33:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:55 vm05 ceph-mon[61345]: pgmap v984: 268 pgs: 23 creating+peering, 245 active+clean; 455 KiB data, 1016 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:33:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:55 vm05 ceph-mon[61345]: osdmap e646: 8 total, 8 up, 8 in 2026-03-09T20:33:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:55 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:33:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:55 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:33:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:55 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:33:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:55 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:33:55.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:55 vm05 ceph-mon[51870]: pgmap v984: 268 pgs: 23 creating+peering, 245 active+clean; 455 KiB data, 1016 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:33:55.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:55 vm05 ceph-mon[51870]: osdmap e646: 8 total, 8 up, 8 in 2026-03-09T20:33:55.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:55 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:33:55.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:55 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:33:55.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:55 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:33:55.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:55 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:33:56.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:55 vm09 ceph-mon[54524]: pgmap v984: 268 pgs: 23 creating+peering, 245 active+clean; 455 KiB data, 1016 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:33:56.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:55 vm09 ceph-mon[54524]: osdmap e646: 8 total, 8 up, 8 in 2026-03-09T20:33:56.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:55 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:33:56.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:55 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:33:56.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:55 vm09 ceph-mon[54524]: from='mgr.24602 
v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:33:56.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:55 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:33:56.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:33:55 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:33:56.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:56 vm05 ceph-mon[51870]: osdmap e647: 8 total, 8 up, 8 in 2026-03-09T20:33:56.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:56 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:33:56.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:56 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:33:56.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:56 vm05 ceph-mon[51870]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:33:56.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:56 vm05 ceph-mon[61345]: osdmap e647: 8 total, 8 up, 8 in 2026-03-09T20:33:56.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:56 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:33:56.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:56 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:33:56.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:56 vm05 ceph-mon[61345]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:33:57.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:56 vm09 ceph-mon[54524]: osdmap e647: 8 total, 8 up, 8 in 2026-03-09T20:33:57.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:56 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:33:57.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:56 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:33:57.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:56 vm09 ceph-mon[54524]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:33:57.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:57 vm05 ceph-mon[51870]: pgmap v987: 268 pgs: 268 active+clean; 455 KiB data, 1017 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T20:33:57.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:57 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:33:57.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:57 vm05 ceph-mon[51870]: osdmap e648: 8 total, 8 up, 8 in 2026-03-09T20:33:57.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:57 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-132"}]: dispatch 2026-03-09T20:33:57.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:57 vm05 ceph-mon[61345]: pgmap v987: 268 pgs: 268 active+clean; 455 KiB data, 1017 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T20:33:57.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:57 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:33:57.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:57 vm05 ceph-mon[61345]: osdmap e648: 8 total, 8 up, 8 in 2026-03-09T20:33:57.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:57 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-132"}]: dispatch 2026-03-09T20:33:58.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:57 vm09 ceph-mon[54524]: pgmap v987: 268 pgs: 268 active+clean; 455 KiB data, 1017 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T20:33:58.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:57 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:33:58.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:57 vm09 ceph-mon[54524]: osdmap e648: 8 total, 8 up, 8 in 2026-03-09T20:33:58.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:57 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-132"}]: dispatch 2026-03-09T20:33:58.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:58 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-132"}]': finished 2026-03-09T20:33:58.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:58 vm05 ceph-mon[51870]: osdmap e649: 8 total, 8 up, 8 in 2026-03-09T20:33:58.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:58 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-132"}]': finished 2026-03-09T20:33:58.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:58 vm05 ceph-mon[61345]: osdmap e649: 8 total, 8 up, 8 in 2026-03-09T20:33:58.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:33:58 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:33:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:33:59.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:58 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-132"}]': finished 2026-03-09T20:33:59.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:58 vm09 ceph-mon[54524]: osdmap e649: 8 total, 8 up, 8 in 2026-03-09T20:34:00.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:59 vm09 ceph-mon[54524]: pgmap v990: 268 pgs: 268 active+clean; 455 KiB data, 1017 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T20:34:00.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:59 vm09 ceph-mon[54524]: osdmap e650: 8 total, 8 up, 8 in 2026-03-09T20:34:00.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:33:59 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:34:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:59 vm05 ceph-mon[51870]: pgmap v990: 268 pgs: 268 active+clean; 455 KiB data, 1017 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T20:34:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:59 vm05 ceph-mon[51870]: osdmap e650: 8 total, 8 up, 8 in 2026-03-09T20:34:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:33:59 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:34:00.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:59 vm05 ceph-mon[61345]: pgmap v990: 268 pgs: 268 active+clean; 455 KiB data, 1017 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T20:34:00.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:59 vm05 ceph-mon[61345]: osdmap e650: 8 total, 8 up, 8 in 2026-03-09T20:34:00.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:33:59 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:34:01.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:00 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:34:01.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:00 vm09 ceph-mon[54524]: osdmap e651: 8 total, 8 up, 8 in 2026-03-09T20:34:01.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:00 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-132"}]: dispatch 2026-03-09T20:34:01.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:00 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:34:01.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:00 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:34:01.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:00 vm09 ceph-mon[54524]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:34:01.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:00 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:34:01.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:00 vm05 ceph-mon[61345]: osdmap e651: 8 total, 8 up, 8 in 2026-03-09T20:34:01.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:00 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-132"}]: dispatch 2026-03-09T20:34:01.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:00 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:34:01.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:00 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:34:01.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:00 vm05 ceph-mon[61345]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:34:01.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:00 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:34:01.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:00 vm05 ceph-mon[51870]: osdmap e651: 8 total, 8 up, 8 in 2026-03-09T20:34:01.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:00 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-132"}]: dispatch 2026-03-09T20:34:01.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:00 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:34:01.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:00 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:34:01.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:00 vm05 ceph-mon[51870]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:34:02.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:01 vm09 ceph-mon[54524]: pgmap v993: 268 pgs: 268 active+clean; 455 KiB data, 1017 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T20:34:02.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:01 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-132"}]': finished 2026-03-09T20:34:02.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:01 vm09 ceph-mon[54524]: osdmap e652: 8 total, 8 up, 8 in 2026-03-09T20:34:02.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:01 vm05 ceph-mon[51870]: pgmap v993: 268 pgs: 268 active+clean; 455 KiB data, 1017 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T20:34:02.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:01 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-132"}]': finished 2026-03-09T20:34:02.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:01 vm05 ceph-mon[51870]: osdmap e652: 8 total, 8 up, 8 in 2026-03-09T20:34:02.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:01 vm05 ceph-mon[61345]: pgmap v993: 268 pgs: 268 active+clean; 455 KiB data, 1017 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T20:34:02.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:01 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-132"}]': finished 2026-03-09T20:34:02.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:01 vm05 ceph-mon[61345]: osdmap e652: 8 total, 8 up, 8 in 2026-03-09T20:34:03.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:02 vm09 ceph-mon[54524]: osdmap e653: 8 total, 8 up, 8 in 2026-03-09T20:34:03.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:02 vm09 ceph-mon[54524]: osdmap e654: 8 total, 8 up, 8 in 2026-03-09T20:34:03.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:02 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-134","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:34:03.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:02 vm05 ceph-mon[51870]: osdmap e653: 8 total, 8 up, 8 in 2026-03-09T20:34:03.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:02 vm05 ceph-mon[51870]: osdmap e654: 8 total, 8 up, 8 in 2026-03-09T20:34:03.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:02 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-134","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:34:03.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:02 vm05 ceph-mon[61345]: osdmap e653: 8 total, 8 up, 8 in 2026-03-09T20:34:03.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:02 vm05 ceph-mon[61345]: osdmap e654: 8 total, 8 up, 8 in 2026-03-09T20:34:03.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:02 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-134","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:34:04.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:03 vm09 ceph-mon[54524]: pgmap v996: 236 pgs: 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1017 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T20:34:04.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:03 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-134","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:34:04.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:03 vm09 ceph-mon[54524]: osdmap e655: 8 total, 8 up, 8 in 2026-03-09T20:34:04.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:03 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-134", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:34:04.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:03 vm05 ceph-mon[51870]: pgmap v996: 236 pgs: 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1017 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T20:34:04.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:03 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-134","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:34:04.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:03 vm05 ceph-mon[51870]: osdmap e655: 8 total, 8 up, 8 in 2026-03-09T20:34:04.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:03 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-134", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:34:04.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:03 vm05 ceph-mon[61345]: pgmap v996: 236 pgs: 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1017 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T20:34:04.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:03 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-134","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:34:04.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:03 vm05 ceph-mon[61345]: osdmap e655: 8 total, 8 up, 8 in 2026-03-09T20:34:04.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:03 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-134", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:34:06.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:05 vm09 ceph-mon[54524]: pgmap v999: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1017 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:34:06.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:05 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-134", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:34:06.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:05 vm09 ceph-mon[54524]: osdmap e656: 8 total, 8 up, 8 in 2026-03-09T20:34:06.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:05 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-134"}]: dispatch 2026-03-09T20:34:06.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:34:05 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:34:06.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:05 vm05 ceph-mon[51870]: pgmap v999: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1017 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:34:06.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:05 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-134", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:34:06.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:05 vm05 ceph-mon[51870]: osdmap e656: 8 total, 8 up, 8 in 2026-03-09T20:34:06.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:05 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-134"}]: dispatch 2026-03-09T20:34:06.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:05 vm05 ceph-mon[61345]: pgmap v999: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1017 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:34:06.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:05 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-134", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:34:06.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:05 vm05 ceph-mon[61345]: osdmap e656: 8 total, 8 up, 8 in 2026-03-09T20:34:06.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:05 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-134"}]: dispatch 2026-03-09T20:34:07.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:06 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-134"}]': finished 2026-03-09T20:34:07.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:06 vm09 ceph-mon[54524]: osdmap e657: 8 total, 8 up, 8 in 2026-03-09T20:34:07.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:06 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-134", "mode": "writeback"}]: dispatch 2026-03-09T20:34:07.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:06 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:34:07.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:06 vm09 ceph-mon[54524]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:34:07.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:06 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-134", "mode": "writeback"}]': finished 2026-03-09T20:34:07.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:06 vm09 ceph-mon[54524]: osdmap e658: 8 total, 8 up, 8 in 2026-03-09T20:34:07.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:06 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-134"}]': finished 2026-03-09T20:34:07.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:06 vm05 ceph-mon[51870]: osdmap e657: 8 total, 8 up, 8 in 2026-03-09T20:34:07.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:06 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-134", "mode": "writeback"}]: dispatch 2026-03-09T20:34:07.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:06 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:34:07.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:06 vm05 ceph-mon[51870]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:34:07.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:06 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-134", "mode": "writeback"}]': finished 2026-03-09T20:34:07.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:06 vm05 ceph-mon[51870]: osdmap e658: 8 total, 8 up, 8 in 2026-03-09T20:34:07.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:06 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-134"}]': finished 2026-03-09T20:34:07.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:06 vm05 ceph-mon[61345]: osdmap e657: 8 total, 8 up, 8 in 2026-03-09T20:34:07.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:06 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-134", "mode": "writeback"}]: dispatch 2026-03-09T20:34:07.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:06 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:34:07.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:06 vm05 ceph-mon[61345]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:34:07.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:06 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-134", "mode": "writeback"}]': finished 2026-03-09T20:34:07.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:06 vm05 ceph-mon[61345]: osdmap e658: 8 total, 8 up, 8 in 2026-03-09T20:34:08.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:07 vm09 ceph-mon[54524]: pgmap v1002: 268 pgs: 268 active+clean; 455 KiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:34:08.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:07 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:34:08.061 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:07 vm05 ceph-mon[51870]: pgmap v1002: 268 pgs: 268 active+clean; 455 KiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:34:08.061 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:07 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:34:08.061 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:07 vm05 ceph-mon[61345]: pgmap v1002: 268 pgs: 268 active+clean; 455 KiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:34:08.061 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:07 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:34:08.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:08 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:34:08.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:08 vm05 ceph-mon[51870]: osdmap e659: 8 total, 8 up, 8 in 2026-03-09T20:34:08.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:08 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-134"}]: dispatch 2026-03-09T20:34:08.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:08 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:34:08.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:08 vm05 ceph-mon[61345]: osdmap e659: 8 total, 8 up, 8 in 2026-03-09T20:34:08.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:08 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-134"}]: dispatch 2026-03-09T20:34:08.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:34:08 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:34:08] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:34:09.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:08 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:34:09.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:08 vm09 ceph-mon[54524]: osdmap e659: 8 total, 8 up, 8 in 2026-03-09T20:34:09.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:08 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-134"}]: dispatch 2026-03-09T20:34:10.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:09 vm09 ceph-mon[54524]: pgmap v1005: 268 pgs: 268 active+clean; 455 KiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:34:10.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:09 vm09 ceph-mon[54524]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:34:10.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:09 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-134"}]': finished 2026-03-09T20:34:10.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:09 vm09 ceph-mon[54524]: osdmap e660: 8 total, 8 up, 8 in 2026-03-09T20:34:10.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:09 vm05 ceph-mon[51870]: pgmap v1005: 268 pgs: 268 active+clean; 455 KiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:34:10.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:09 vm05 ceph-mon[51870]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:34:10.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:09 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-134"}]': finished 2026-03-09T20:34:10.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:09 vm05 ceph-mon[51870]: osdmap e660: 8 total, 8 up, 8 in 2026-03-09T20:34:10.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:09 vm05 ceph-mon[61345]: pgmap v1005: 268 pgs: 268 active+clean; 455 KiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:34:10.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:09 vm05 ceph-mon[61345]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:34:10.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:09 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-134"}]': finished 2026-03-09T20:34:10.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:09 vm05 ceph-mon[61345]: osdmap e660: 8 total, 8 up, 8 in 2026-03-09T20:34:11.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:10 vm09 ceph-mon[54524]: osdmap e661: 8 total, 8 up, 8 in 2026-03-09T20:34:11.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:10 vm05 ceph-mon[51870]: osdmap e661: 8 total, 8 up, 8 in 2026-03-09T20:34:11.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:10 vm05 ceph-mon[61345]: osdmap e661: 8 total, 8 up, 8 in 2026-03-09T20:34:12.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:11 vm09 ceph-mon[54524]: pgmap v1008: 236 pgs: 236 active+clean; 455 KiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T20:34:12.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:11 vm09 ceph-mon[54524]: osdmap e662: 8 total, 8 up, 8 in 2026-03-09T20:34:12.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:11 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-136","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:34:12.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:11 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-136","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:34:12.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:11 vm09 ceph-mon[54524]: osdmap e663: 8 total, 8 up, 8 in 2026-03-09T20:34:12.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:11 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:34:12.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:11 vm05 ceph-mon[51870]: pgmap v1008: 236 pgs: 236 active+clean; 455 KiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T20:34:12.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:11 vm05 ceph-mon[51870]: osdmap e662: 8 total, 8 up, 8 in 2026-03-09T20:34:12.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:11 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-136","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:34:12.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:11 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-136","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:34:12.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:11 vm05 ceph-mon[51870]: osdmap e663: 8 total, 8 up, 8 in 2026-03-09T20:34:12.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:11 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:34:12.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:11 vm05 ceph-mon[61345]: pgmap v1008: 236 pgs: 236 active+clean; 455 KiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-09T20:34:12.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:11 vm05 ceph-mon[61345]: osdmap e662: 8 total, 8 up, 8 in 2026-03-09T20:34:12.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:11 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-136","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:34:12.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:11 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-136","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:34:12.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:11 vm05 ceph-mon[61345]: osdmap e663: 8 total, 8 up, 8 in 2026-03-09T20:34:12.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:11 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:34:13.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:13 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-136", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:34:13.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:13 vm09 ceph-mon[54524]: osdmap e664: 8 total, 8 up, 8 in 2026-03-09T20:34:13.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:13 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-136"}]: dispatch 2026-03-09T20:34:13.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:13 vm09 ceph-mon[54524]: pgmap v1012: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.4 KiB/s wr, 2 op/s 2026-03-09T20:34:13.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:13 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-136", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:34:13.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:13 vm05 ceph-mon[51870]: osdmap e664: 8 total, 8 up, 8 in 2026-03-09T20:34:13.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:13 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-136"}]: dispatch 2026-03-09T20:34:13.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:13 vm05 ceph-mon[51870]: pgmap v1012: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.4 KiB/s wr, 2 op/s 2026-03-09T20:34:13.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:13 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-136", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:34:13.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:13 vm05 ceph-mon[61345]: osdmap e664: 8 total, 8 up, 8 in 2026-03-09T20:34:13.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:13 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-136"}]: dispatch 2026-03-09T20:34:13.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:13 vm05 ceph-mon[61345]: pgmap v1012: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.4 KiB/s wr, 2 op/s 2026-03-09T20:34:14.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:14 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-136"}]': finished 2026-03-09T20:34:14.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:14 vm09 ceph-mon[54524]: osdmap e665: 8 total, 8 up, 8 in 2026-03-09T20:34:14.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:14 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-136", "mode": "writeback"}]: dispatch 2026-03-09T20:34:14.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:14 vm09 ceph-mon[54524]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:34:14.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:14 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-136"}]': finished 2026-03-09T20:34:14.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:14 vm05 ceph-mon[51870]: osdmap e665: 8 total, 8 up, 8 in 2026-03-09T20:34:14.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:14 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-136", "mode": "writeback"}]: dispatch 2026-03-09T20:34:14.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:14 vm05 ceph-mon[51870]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:34:14.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:14 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-136"}]': finished 2026-03-09T20:34:14.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:14 vm05 ceph-mon[61345]: osdmap e665: 8 total, 8 up, 8 in 2026-03-09T20:34:14.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:14 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-136", "mode": "writeback"}]: dispatch 2026-03-09T20:34:14.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:14 vm05 ceph-mon[61345]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:34:15.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:15 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-136", "mode": "writeback"}]': finished 2026-03-09T20:34:15.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:15 vm09 ceph-mon[54524]: osdmap e666: 8 total, 8 up, 8 in 2026-03-09T20:34:15.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:15 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:34:15.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:15 vm09 ceph-mon[54524]: pgmap v1015: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1022 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:34:15.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:15 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:34:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:15 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-136", "mode": "writeback"}]': finished 2026-03-09T20:34:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:15 vm05 ceph-mon[51870]: osdmap e666: 8 total, 8 up, 8 in 2026-03-09T20:34:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:15 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:34:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:15 vm05 ceph-mon[51870]: pgmap v1015: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1022 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:34:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:15 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:34:15.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:15 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-136", "mode": "writeback"}]': finished 2026-03-09T20:34:15.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:15 vm05 ceph-mon[61345]: osdmap e666: 8 total, 8 up, 8 in 2026-03-09T20:34:15.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:15 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:34:15.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:15 vm05 ceph-mon[61345]: pgmap v1015: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1022 MiB used, 159 GiB / 160 GiB avail 2026-03-09T20:34:15.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:15 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:34:16.273 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:34:15 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:34:16.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:16 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:34:16.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:16 vm05 ceph-mon[61345]: osdmap e667: 8 total, 8 up, 8 in 2026-03-09T20:34:16.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:16 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-136"}]: dispatch 2026-03-09T20:34:16.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:16 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:34:16.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:16 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:34:16.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:16 vm05 ceph-mon[51870]: osdmap e667: 8 total, 8 up, 8 in 2026-03-09T20:34:16.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:16 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-136"}]: dispatch 2026-03-09T20:34:16.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:16 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:34:16.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:16 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:34:16.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:16 vm09 ceph-mon[54524]: osdmap e667: 8 total, 8 up, 8 in 2026-03-09T20:34:16.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:16 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-136"}]: dispatch 2026-03-09T20:34:16.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:16 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:34:17.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:17 vm05 ceph-mon[61345]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:34:17.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:17 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-136"}]': finished 2026-03-09T20:34:17.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:17 vm05 ceph-mon[61345]: osdmap e668: 8 total, 8 up, 8 in 2026-03-09T20:34:17.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:17 vm05 ceph-mon[61345]: pgmap v1018: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.9 MiB/s wr, 1 op/s 2026-03-09T20:34:17.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:17 vm05 ceph-mon[51870]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:34:17.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:17 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-136"}]': finished 2026-03-09T20:34:17.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:17 vm05 ceph-mon[51870]: osdmap e668: 8 total, 8 up, 8 in 2026-03-09T20:34:17.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:17 vm05 ceph-mon[51870]: pgmap v1018: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.9 MiB/s wr, 1 op/s 2026-03-09T20:34:17.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:17 vm09 ceph-mon[54524]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:34:17.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:17 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-136"}]': finished 2026-03-09T20:34:17.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:17 vm09 ceph-mon[54524]: osdmap e668: 8 total, 8 up, 8 in 2026-03-09T20:34:17.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:17 vm09 ceph-mon[54524]: pgmap v1018: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.9 MiB/s wr, 1 op/s 2026-03-09T20:34:18.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:18 vm05 ceph-mon[61345]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:34:18.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:18 vm05 ceph-mon[61345]: osdmap e669: 8 total, 8 up, 8 in 2026-03-09T20:34:18.634 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:18 vm05 ceph-mon[51870]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:34:18.634 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:18 vm05 ceph-mon[51870]: osdmap e669: 8 total, 8 up, 8 in 2026-03-09T20:34:18.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:18 vm09 ceph-mon[54524]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:34:18.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:18 vm09 ceph-mon[54524]: osdmap e669: 8 total, 8 up, 8 in 2026-03-09T20:34:18.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:34:18 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:34:18] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:34:19.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:19 vm05 ceph-mon[61345]: osdmap e670: 8 total, 8 up, 8 in 2026-03-09T20:34:19.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:19 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-138","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:34:19.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:19 vm05 ceph-mon[61345]: pgmap v1021: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s 2026-03-09T20:34:19.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:19 vm05 ceph-mon[51870]: osdmap e670: 8 total, 8 up, 8 in 2026-03-09T20:34:19.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:19 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-138","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:34:19.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:19 vm05 ceph-mon[51870]: pgmap v1021: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s 2026-03-09T20:34:19.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:19 vm09 ceph-mon[54524]: osdmap e670: 8 total, 8 up, 8 in 2026-03-09T20:34:19.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:19 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-138","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:34:19.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:19 vm09 ceph-mon[54524]: pgmap v1021: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s 2026-03-09T20:34:20.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:20 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-138","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:34:20.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:20 vm05 ceph-mon[61345]: osdmap e671: 8 total, 8 up, 8 in 2026-03-09T20:34:20.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:20 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-138", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:34:20.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:20 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-138","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:34:20.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:20 vm05 ceph-mon[51870]: osdmap e671: 8 total, 8 up, 8 in 2026-03-09T20:34:20.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:20 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-138", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:34:20.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:20 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-138","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:34:20.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:20 vm09 ceph-mon[54524]: osdmap e671: 8 total, 8 up, 8 in 2026-03-09T20:34:20.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:20 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-138", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:34:21.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:21 vm05 ceph-mon[61345]: pgmap v1023: 268 pgs: 13 unknown, 255 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:34:21.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:21 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-138", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:34:21.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:21 vm05 ceph-mon[61345]: osdmap e672: 8 total, 8 up, 8 in 2026-03-09T20:34:21.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:21 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-138","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T20:34:21.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:21 vm05 ceph-mon[51870]: pgmap v1023: 268 pgs: 13 unknown, 255 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:34:21.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:21 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-138", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:34:21.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:21 vm05 ceph-mon[51870]: osdmap e672: 8 total, 8 up, 8 in 2026-03-09T20:34:21.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:21 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-138","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T20:34:21.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:21 vm09 ceph-mon[54524]: pgmap v1023: 268 pgs: 13 unknown, 255 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:34:21.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:21 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-138", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:34:21.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:21 vm09 ceph-mon[54524]: osdmap e672: 8 total, 8 up, 8 in 2026-03-09T20:34:21.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:21 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-138","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T20:34:22.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:22 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-138","var": "hit_set_count","val": "2"}]': finished 2026-03-09T20:34:22.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:22 vm09 ceph-mon[54524]: osdmap e673: 8 total, 8 up, 8 in 2026-03-09T20:34:22.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:22 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-138","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:34:22.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:22 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-138","var": "hit_set_period","val": "600"}]': finished 2026-03-09T20:34:22.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:22 vm09 ceph-mon[54524]: osdmap e674: 8 total, 8 up, 8 in 2026-03-09T20:34:22.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:22 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-138","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T20:34:22.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:22 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-138","var": "hit_set_count","val": "2"}]': finished 2026-03-09T20:34:22.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:22 vm05 ceph-mon[61345]: osdmap e673: 8 total, 8 up, 8 in 2026-03-09T20:34:22.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:22 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-138","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:34:22.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:22 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-138","var": "hit_set_period","val": "600"}]': finished 2026-03-09T20:34:22.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:22 vm05 ceph-mon[61345]: osdmap e674: 8 total, 8 up, 8 in 2026-03-09T20:34:22.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:22 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-138","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T20:34:22.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:22 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-138","var": "hit_set_count","val": "2"}]': finished 2026-03-09T20:34:22.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:22 vm05 ceph-mon[51870]: osdmap e673: 8 total, 8 up, 8 in 2026-03-09T20:34:22.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:22 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-138","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:34:22.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:22 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-138","var": "hit_set_period","val": "600"}]': finished 2026-03-09T20:34:22.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:22 vm05 ceph-mon[51870]: osdmap e674: 8 total, 8 up, 8 in 2026-03-09T20:34:22.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:22 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-138","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-09T20:34:23.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:23 vm09 ceph-mon[54524]: pgmap v1026: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:34:23.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:23 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-138","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T20:34:23.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:23 vm09 ceph-mon[54524]: osdmap e675: 8 total, 8 up, 8 in 2026-03-09T20:34:23.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:23 vm05 ceph-mon[61345]: pgmap v1026: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:34:23.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:23 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-138","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T20:34:23.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:23 vm05 ceph-mon[61345]: osdmap e675: 8 total, 8 up, 8 in 2026-03-09T20:34:23.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:23 vm05 ceph-mon[51870]: pgmap v1026: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:34:23.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:23 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-138","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-09T20:34:23.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:23 vm05 ceph-mon[51870]: osdmap e675: 8 total, 8 up, 8 in 2026-03-09T20:34:25.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:25 vm09 ceph-mon[54524]: pgmap v1029: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:34:25.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:25 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:34:25.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:25 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-138"}]: dispatch 2026-03-09T20:34:25.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:25 vm05 ceph-mon[61345]: pgmap v1029: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:34:25.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:25 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:34:25.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:25 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-138"}]: dispatch 2026-03-09T20:34:25.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:25 vm05 ceph-mon[51870]: pgmap v1029: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:34:25.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:25 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:34:25.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:25 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-138"}]: dispatch 2026-03-09T20:34:26.273 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:34:25 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:34:26.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:26 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-138"}]': finished 2026-03-09T20:34:26.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:26 vm09 ceph-mon[54524]: osdmap e676: 8 total, 8 up, 8 in 2026-03-09T20:34:26.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:26 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:34:26.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:26 vm09 ceph-mon[54524]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:34:26.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:26 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-138"}]': finished 2026-03-09T20:34:26.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:26 vm05 ceph-mon[61345]: osdmap e676: 8 total, 8 up, 8 in 2026-03-09T20:34:26.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:26 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:34:26.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:26 vm05 ceph-mon[61345]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:34:26.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:26 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-138"}]': finished 2026-03-09T20:34:26.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:26 vm05 ceph-mon[51870]: osdmap e676: 8 total, 8 up, 8 in 2026-03-09T20:34:26.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:26 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:34:26.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:26 vm05 ceph-mon[51870]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:34:27.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:27 vm09 ceph-mon[54524]: pgmap v1031: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-09T20:34:27.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:27 vm09 ceph-mon[54524]: osdmap e677: 8 total, 8 up, 8 in 2026-03-09T20:34:27.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:27 vm05 ceph-mon[61345]: pgmap v1031: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-09T20:34:27.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:27 vm05 ceph-mon[61345]: osdmap e677: 8 total, 8 up, 8 in 2026-03-09T20:34:27.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:27 vm05 ceph-mon[51870]: pgmap v1031: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-09T20:34:27.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:27 vm05 ceph-mon[51870]: osdmap e677: 8 total, 8 up, 8 in 
2026-03-09T20:34:28.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:28 vm09 ceph-mon[54524]: osdmap e678: 8 total, 8 up, 8 in 2026-03-09T20:34:28.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:28 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-140","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:34:28.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:28 vm05 ceph-mon[61345]: osdmap e678: 8 total, 8 up, 8 in 2026-03-09T20:34:28.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:28 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-140","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:34:28.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:28 vm05 ceph-mon[51870]: osdmap e678: 8 total, 8 up, 8 in 2026-03-09T20:34:28.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:28 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-140","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:34:28.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:34:28 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:34:28] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:34:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:29 vm09 ceph-mon[54524]: pgmap v1034: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-09T20:34:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:29 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-140","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:34:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:29 vm09 ceph-mon[54524]: osdmap e679: 8 total, 8 up, 8 in 2026-03-09T20:34:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:29 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-140", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:34:29.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:29 vm05 ceph-mon[61345]: pgmap v1034: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-09T20:34:29.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:29 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-140","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:34:29.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:29 vm05 ceph-mon[61345]: osdmap e679: 8 total, 8 up, 8 in 2026-03-09T20:34:29.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:29 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-140", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:34:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:29 vm05 ceph-mon[51870]: pgmap v1034: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-09T20:34:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:29 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-140","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:34:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:29 vm05 ceph-mon[51870]: osdmap e679: 8 total, 8 up, 8 in 2026-03-09T20:34:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:29 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-140", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:34:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:30 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-140", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:34:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:30 vm05 ceph-mon[61345]: osdmap e680: 8 total, 8 up, 8 in 2026-03-09T20:34:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:30 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-140","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T20:34:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:30 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:34:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:30 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:34:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:30 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-140","var": "hit_set_count","val": "3"}]': finished 2026-03-09T20:34:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:30 vm05 ceph-mon[61345]: osdmap e681: 8 total, 8 up, 8 in 2026-03-09T20:34:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:30 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-140","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T20:34:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:30 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-140", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:34:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:30 vm05 ceph-mon[51870]: osdmap e680: 8 total, 8 up, 8 in 2026-03-09T20:34:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:30 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-140","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T20:34:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:30 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:34:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:30 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:34:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:30 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-140","var": "hit_set_count","val": "3"}]': finished 2026-03-09T20:34:30.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:30 vm05 ceph-mon[51870]: osdmap e681: 8 total, 8 up, 8 in 2026-03-09T20:34:30.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:30 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-140","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T20:34:31.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:30 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-140", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:34:31.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:30 vm09 ceph-mon[54524]: osdmap e680: 8 total, 8 up, 8 in 2026-03-09T20:34:31.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:30 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-140","var": "hit_set_count","val": "3"}]: dispatch 2026-03-09T20:34:31.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:30 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:34:31.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:30 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:34:31.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:30 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-140","var": "hit_set_count","val": "3"}]': finished 2026-03-09T20:34:31.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:30 vm09 ceph-mon[54524]: osdmap e681: 8 total, 8 up, 8 in 2026-03-09T20:34:31.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:30 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-140","var": "hit_set_period","val": "3"}]: dispatch 2026-03-09T20:34:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:31 vm05 ceph-mon[61345]: pgmap v1037: 268 pgs: 14 unknown, 254 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:34:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:31 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-140","var": "hit_set_period","val": "3"}]': finished 2026-03-09T20:34:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:31 vm05 ceph-mon[61345]: osdmap e682: 8 total, 8 up, 8 in 2026-03-09T20:34:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:31 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-140","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T20:34:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:31 vm05 ceph-mon[51870]: pgmap v1037: 268 pgs: 14 unknown, 254 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:34:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:31 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-140","var": "hit_set_period","val": "3"}]': finished 2026-03-09T20:34:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:31 vm05 ceph-mon[51870]: osdmap e682: 8 total, 8 up, 8 in 2026-03-09T20:34:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:31 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-140","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T20:34:32.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:31 vm09 ceph-mon[54524]: pgmap v1037: 268 pgs: 14 unknown, 254 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:34:32.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:31 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-140","var": "hit_set_period","val": "3"}]': finished 2026-03-09T20:34:32.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:31 vm09 ceph-mon[54524]: osdmap e682: 8 total, 8 up, 8 in 2026-03-09T20:34:32.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:31 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-140","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T20:34:33.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:33 vm05 ceph-mon[61345]: pgmap v1040: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:34:33.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:33 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-140","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T20:34:33.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:33 vm05 ceph-mon[61345]: osdmap e683: 8 total, 8 up, 8 in 2026-03-09T20:34:33.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:33 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-140","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T20:34:33.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:33 vm05 ceph-mon[51870]: pgmap v1040: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:34:33.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:33 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-140","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T20:34:33.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:33 vm05 ceph-mon[51870]: osdmap e683: 8 total, 8 up, 8 in 2026-03-09T20:34:33.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:33 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-140","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T20:34:34.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:33 vm09 ceph-mon[54524]: pgmap v1040: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:34:34.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:33 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-140","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T20:34:34.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:33 vm09 ceph-mon[54524]: osdmap e683: 8 total, 8 up, 8 in 2026-03-09T20:34:34.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:33 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-140","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-09T20:34:34.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:34 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-140","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-09T20:34:34.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:34 vm05 ceph-mon[61345]: osdmap e684: 8 total, 8 up, 8 in 2026-03-09T20:34:34.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:34 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-140","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-09T20:34:34.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:34 vm05 ceph-mon[51870]: osdmap e684: 8 total, 8 up, 8 in 2026-03-09T20:34:35.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:34 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-140","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-09T20:34:35.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:34 vm09 ceph-mon[54524]: osdmap e684: 8 total, 8 up, 8 in 2026-03-09T20:34:35.897 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:35 vm09 ceph-mon[54524]: pgmap v1043: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:34:35.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:35 vm05 ceph-mon[61345]: pgmap v1043: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:34:35.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:35 vm05 ceph-mon[51870]: pgmap v1043: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:34:36.273 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:34:35 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:34:36.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:36 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:34:36.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:36 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:34:37.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:36 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:34:37.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:37 vm05 ceph-mon[61345]: pgmap v1044: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 873 B/s rd, 2.7 KiB/s wr, 1 op/s 2026-03-09T20:34:37.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:37 vm05 ceph-mon[51870]: pgmap v1044: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 873 B/s rd, 2.7 KiB/s wr, 1 op/s 2026-03-09T20:34:38.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:37 vm09 ceph-mon[54524]: pgmap v1044: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 873 B/s rd, 2.7 KiB/s wr, 1 op/s 2026-03-09T20:34:38.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:34:38 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:34:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:34:39.825 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:39 vm09 ceph-mon[54524]: pgmap v1045: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 746 B/s rd, 2.3 KiB/s wr, 1 op/s 2026-03-09T20:34:39.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:39 vm05 ceph-mon[61345]: pgmap v1045: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 746 B/s rd, 2.3 KiB/s wr, 1 op/s 2026-03-09T20:34:39.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:39 vm05 ceph-mon[51870]: pgmap v1045: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 746 B/s rd, 2.3 KiB/s wr, 1 op/s 2026-03-09T20:34:41.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:41 vm05 ceph-mon[61345]: pgmap v1046: 268 pgs: 
268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 7.2 KiB/s wr, 2 op/s 2026-03-09T20:34:41.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:41 vm05 ceph-mon[51870]: pgmap v1046: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 7.2 KiB/s wr, 2 op/s 2026-03-09T20:34:42.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:41 vm09 ceph-mon[54524]: pgmap v1046: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 7.2 KiB/s wr, 2 op/s 2026-03-09T20:34:43.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:43 vm05 ceph-mon[61345]: pgmap v1047: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 5.9 KiB/s wr, 1 op/s 2026-03-09T20:34:43.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:43 vm05 ceph-mon[51870]: pgmap v1047: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 5.9 KiB/s wr, 1 op/s 2026-03-09T20:34:44.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:43 vm09 ceph-mon[54524]: pgmap v1047: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 5.9 KiB/s wr, 1 op/s 2026-03-09T20:34:45.907 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:45 vm09 ceph-mon[54524]: pgmap v1048: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 943 B/s rd, 5.3 KiB/s wr, 1 op/s 2026-03-09T20:34:45.907 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:45 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:34:45.907 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:45 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:34:45.907 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:45 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-140"}]: dispatch 2026-03-09T20:34:45.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:45 vm05 ceph-mon[61345]: pgmap v1048: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 943 B/s rd, 5.3 KiB/s wr, 1 op/s 2026-03-09T20:34:45.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:45 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:34:45.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:45 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:34:45.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:45 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-140"}]: dispatch 2026-03-09T20:34:45.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:45 vm05 ceph-mon[51870]: pgmap v1048: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 943 B/s rd, 5.3 KiB/s wr, 1 op/s 2026-03-09T20:34:45.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:45 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:34:45.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:45 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:34:45.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:45 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-140"}]: dispatch 2026-03-09T20:34:46.273 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:34:45 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:34:47.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:46 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-140"}]': finished 2026-03-09T20:34:47.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:46 vm09 ceph-mon[54524]: osdmap e685: 8 total, 8 up, 8 in 2026-03-09T20:34:47.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:46 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:34:47.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:46 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-140"}]': finished 2026-03-09T20:34:47.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:46 vm05 ceph-mon[61345]: osdmap e685: 8 total, 8 up, 8 in 2026-03-09T20:34:47.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:46 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:34:47.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:46 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-140"}]': finished 2026-03-09T20:34:47.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:46 vm05 ceph-mon[51870]: osdmap e685: 8 total, 8 up, 8 in 2026-03-09T20:34:47.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:46 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:34:48.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:47 vm09 ceph-mon[54524]: pgmap v1050: 268 pgs: 268 active+clean; 4.4 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 8.3 KiB/s wr, 2 op/s 2026-03-09T20:34:48.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:47 vm09 ceph-mon[54524]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:34:48.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:47 vm09 ceph-mon[54524]: osdmap e686: 8 total, 8 up, 8 in 2026-03-09T20:34:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:47 vm05 ceph-mon[61345]: pgmap v1050: 268 pgs: 268 active+clean; 4.4 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 8.3 KiB/s wr, 2 op/s 2026-03-09T20:34:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:47 vm05 ceph-mon[61345]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:34:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:47 vm05 ceph-mon[61345]: osdmap e686: 8 total, 8 up, 8 in 2026-03-09T20:34:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:47 vm05 ceph-mon[51870]: pgmap v1050: 268 pgs: 268 active+clean; 4.4 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 8.3 KiB/s wr, 2 op/s 2026-03-09T20:34:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:47 vm05 ceph-mon[51870]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:34:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:47 vm05 ceph-mon[51870]: osdmap e686: 8 total, 8 up, 8 in 2026-03-09T20:34:48.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:48 vm05 ceph-mon[61345]: osdmap e687: 8 total, 8 up, 8 in 2026-03-09T20:34:48.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:48 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-142","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:34:48.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:48 vm05 ceph-mon[51870]: osdmap e687: 8 total, 8 up, 8 in 2026-03-09T20:34:48.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:48 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-142","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:34:48.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:34:48 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:34:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:34:49.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:48 vm09 ceph-mon[54524]: osdmap e687: 8 total, 8 up, 8 in 2026-03-09T20:34:49.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:48 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-142","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:34:50.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:49 vm05 ceph-mon[61345]: pgmap v1053: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:34:50.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:49 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-142","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:34:50.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:49 vm05 ceph-mon[61345]: osdmap e688: 8 total, 8 up, 8 in 2026-03-09T20:34:50.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:49 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-142", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:34:50.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:49 vm05 ceph-mon[51870]: pgmap v1053: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:34:50.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:49 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-142","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:34:50.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:49 vm05 ceph-mon[51870]: osdmap e688: 8 total, 8 up, 8 in 2026-03-09T20:34:50.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:49 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-142", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:34:50.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:49 vm09 ceph-mon[54524]: pgmap v1053: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:34:50.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:49 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-142","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:34:50.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:49 vm09 ceph-mon[54524]: osdmap e688: 8 total, 8 up, 8 in 2026-03-09T20:34:50.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:49 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-142", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:34:51.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:50 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-142", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:34:51.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:50 vm09 ceph-mon[54524]: osdmap e689: 8 total, 8 up, 8 in 2026-03-09T20:34:51.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:50 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-142"}]: dispatch 2026-03-09T20:34:51.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:50 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-142"}]': finished 2026-03-09T20:34:51.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:50 vm09 ceph-mon[54524]: osdmap e690: 8 total, 8 up, 8 in 2026-03-09T20:34:51.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:50 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-142", "mode": "writeback"}]: dispatch 2026-03-09T20:34:51.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:50 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-142", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:34:51.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:50 vm05 ceph-mon[61345]: osdmap e689: 8 total, 8 up, 8 in 2026-03-09T20:34:51.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:50 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-142"}]: dispatch 2026-03-09T20:34:51.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:50 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-142"}]': finished 2026-03-09T20:34:51.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:50 vm05 ceph-mon[61345]: osdmap e690: 8 total, 8 up, 8 in 2026-03-09T20:34:51.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:50 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-142", "mode": "writeback"}]: dispatch 2026-03-09T20:34:51.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:50 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-142", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:34:51.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:50 vm05 ceph-mon[51870]: osdmap e689: 8 total, 8 up, 8 in 2026-03-09T20:34:51.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:50 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-142"}]: dispatch 2026-03-09T20:34:51.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:50 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-142"}]': finished 2026-03-09T20:34:51.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:50 vm05 ceph-mon[51870]: osdmap e690: 8 total, 8 up, 8 in 2026-03-09T20:34:51.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:50 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-142", "mode": "writeback"}]: dispatch 2026-03-09T20:34:52.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:51 vm05 ceph-mon[61345]: pgmap v1056: 268 pgs: 11 unknown, 257 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:34:52.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:51 vm05 ceph-mon[61345]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:34:52.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:51 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-142", "mode": "writeback"}]': finished 2026-03-09T20:34:52.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:51 vm05 ceph-mon[61345]: osdmap e691: 8 total, 8 up, 8 in 2026-03-09T20:34:52.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:51 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-142","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T20:34:52.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:51 vm05 ceph-mon[51870]: pgmap v1056: 268 pgs: 11 unknown, 257 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:34:52.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:51 vm05 ceph-mon[51870]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:34:52.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:51 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-142", "mode": "writeback"}]': finished 2026-03-09T20:34:52.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:51 vm05 ceph-mon[51870]: osdmap e691: 8 total, 8 up, 8 in 2026-03-09T20:34:52.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:51 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-142","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T20:34:52.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:51 vm09 ceph-mon[54524]: pgmap v1056: 268 pgs: 11 unknown, 257 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:34:52.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:51 vm09 ceph-mon[54524]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:34:52.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:51 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-142", "mode": "writeback"}]': finished 2026-03-09T20:34:52.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:51 vm09 ceph-mon[54524]: osdmap e691: 8 total, 8 up, 8 in 2026-03-09T20:34:52.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:51 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-142","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T20:34:54.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:53 vm05 ceph-mon[61345]: pgmap v1059: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T20:34:54.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:53 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-142","var": "hit_set_count","val": "2"}]': finished 2026-03-09T20:34:54.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:53 vm05 ceph-mon[61345]: osdmap e692: 8 total, 8 up, 8 in 2026-03-09T20:34:54.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:53 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-142","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:34:54.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:53 vm05 ceph-mon[51870]: pgmap v1059: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T20:34:54.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:53 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-142","var": "hit_set_count","val": "2"}]': finished 2026-03-09T20:34:54.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:53 vm05 ceph-mon[51870]: osdmap e692: 8 total, 8 up, 8 in 2026-03-09T20:34:54.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:53 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-142","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:34:54.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:53 vm09 ceph-mon[54524]: pgmap v1059: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-09T20:34:54.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:53 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-142","var": "hit_set_count","val": "2"}]': finished 2026-03-09T20:34:54.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:53 vm09 ceph-mon[54524]: osdmap e692: 8 total, 8 up, 8 in 2026-03-09T20:34:54.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:53 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-142","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:34:55.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:54 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-142","var": "hit_set_period","val": "600"}]': finished 2026-03-09T20:34:55.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:54 vm05 ceph-mon[51870]: osdmap e693: 8 total, 8 up, 8 in 2026-03-09T20:34:55.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:54 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-142","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T20:34:55.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:54 vm05 ceph-mon[51870]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:34:55.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:54 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-142","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T20:34:55.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:54 vm05 ceph-mon[51870]: osdmap e694: 8 total, 8 up, 8 in 2026-03-09T20:34:55.161 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:54 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-142","var": "hit_set_period","val": "600"}]': finished 2026-03-09T20:34:55.161 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:54 vm05 ceph-mon[61345]: osdmap e693: 8 total, 8 up, 8 in 2026-03-09T20:34:55.161 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:54 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-142","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T20:34:55.161 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:54 vm05 ceph-mon[61345]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:34:55.161 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:54 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-142","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T20:34:55.161 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:54 vm05 ceph-mon[61345]: osdmap e694: 8 total, 8 up, 8 in 2026-03-09T20:34:55.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:54 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-142","var": "hit_set_period","val": "600"}]': finished 2026-03-09T20:34:55.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:54 vm09 ceph-mon[54524]: osdmap e693: 8 total, 8 up, 8 in 2026-03-09T20:34:55.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:54 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-142","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T20:34:55.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:54 vm09 ceph-mon[54524]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:34:55.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:54 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-142","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T20:34:55.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:54 vm09 ceph-mon[54524]: osdmap e694: 8 total, 8 up, 8 in 2026-03-09T20:34:56.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:55 vm05 ceph-mon[61345]: pgmap v1062: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s wr, 1 op/s 2026-03-09T20:34:56.161 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:55 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-142","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T20:34:56.161 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:55 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:34:56.161 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:55 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:34:56.161 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:55 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:34:56.161 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:55 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:34:56.161 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:55 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-142","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T20:34:56.161 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:55 vm05 ceph-mon[61345]: osdmap e695: 8 total, 8 up, 8 in 2026-03-09T20:34:56.161 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:55 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-142","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T20:34:56.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:55 vm05 ceph-mon[51870]: pgmap v1062: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s wr, 1 op/s 2026-03-09T20:34:56.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:55 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-142","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T20:34:56.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:55 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:34:56.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:55 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:34:56.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:55 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:34:56.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:55 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:34:56.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:55 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-142","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T20:34:56.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:55 vm05 ceph-mon[51870]: osdmap e695: 8 total, 8 up, 8 in 2026-03-09T20:34:56.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:55 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-142","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T20:34:56.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:55 vm09 ceph-mon[54524]: pgmap v1062: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s wr, 1 op/s 2026-03-09T20:34:56.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:55 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-142","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T20:34:56.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:55 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:34:56.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:55 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:34:56.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:55 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:34:56.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:55 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:34:56.274 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:55 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-142","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T20:34:56.274 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:55 vm09 ceph-mon[54524]: osdmap e695: 8 total, 8 up, 8 in 2026-03-09T20:34:56.274 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:55 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-142","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-09T20:34:56.274 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:34:55 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:34:57.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:56 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:34:57.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:56 vm05 ceph-mon[61345]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:34:57.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:56 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-142","var": "hit_set_grade_decay_rate","val": "20"}]': finished 2026-03-09T20:34:57.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:56 vm05 ceph-mon[61345]: osdmap e696: 8 total, 8 up, 8 in 2026-03-09T20:34:57.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:56 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-142","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T20:34:57.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:56 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:34:57.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:56 vm05 ceph-mon[51870]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:34:57.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:56 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-142","var": "hit_set_grade_decay_rate","val": "20"}]': finished 2026-03-09T20:34:57.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:56 vm05 ceph-mon[51870]: osdmap e696: 8 total, 8 up, 8 in 2026-03-09T20:34:57.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:56 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-142","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T20:34:57.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:56 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:34:57.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:56 vm09 ceph-mon[54524]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:34:57.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:56 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-142","var": "hit_set_grade_decay_rate","val": "20"}]': finished 2026-03-09T20:34:57.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:56 vm09 ceph-mon[54524]: osdmap e696: 8 total, 8 up, 8 in 2026-03-09T20:34:57.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:56 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-142","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-09T20:34:58.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:57 vm05 ceph-mon[61345]: pgmap v1065: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:34:58.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:57 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-142","var": "hit_set_search_last_n","val": "1"}]': finished 2026-03-09T20:34:58.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:57 vm05 ceph-mon[61345]: osdmap e697: 8 total, 8 up, 8 in 2026-03-09T20:34:58.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:57 vm05 ceph-mon[51870]: pgmap v1065: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:34:58.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:57 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-142","var": "hit_set_search_last_n","val": "1"}]': finished 2026-03-09T20:34:58.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:57 vm05 ceph-mon[51870]: osdmap e697: 8 total, 8 up, 8 in 2026-03-09T20:34:58.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:57 vm09 ceph-mon[54524]: pgmap v1065: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:34:58.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:57 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-142","var": "hit_set_search_last_n","val": "1"}]': finished 2026-03-09T20:34:58.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:57 vm09 ceph-mon[54524]: osdmap e697: 8 total, 8 up, 8 in 2026-03-09T20:34:58.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:58 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:34:58.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:58 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:34:58.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:34:58 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:34:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:34:59.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:58 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:35:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:59 vm05 ceph-mon[51870]: pgmap v1068: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:35:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:59 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:35:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:59 vm05 ceph-mon[51870]: osdmap e698: 8 total, 8 up, 8 in 2026-03-09T20:35:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:34:59 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-142"}]: dispatch 2026-03-09T20:35:00.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:59 vm05 ceph-mon[61345]: pgmap v1068: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:35:00.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:59 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:35:00.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:59 vm05 ceph-mon[61345]: osdmap e698: 8 total, 8 up, 8 in 2026-03-09T20:35:00.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:34:59 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-142"}]: dispatch 2026-03-09T20:35:00.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:59 vm09 ceph-mon[54524]: pgmap v1068: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:35:00.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:59 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:35:00.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:59 vm09 ceph-mon[54524]: osdmap e698: 8 total, 8 up, 8 in 2026-03-09T20:35:00.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:34:59 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-142"}]: dispatch 2026-03-09T20:35:01.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:00 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-142"}]': finished 2026-03-09T20:35:01.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:00 vm05 ceph-mon[51870]: osdmap e699: 8 total, 8 up, 8 in 2026-03-09T20:35:01.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:00 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:35:01.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:00 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-142"}]: dispatch 2026-03-09T20:35:01.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:00 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:35:01.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:00 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-142"}]': finished 2026-03-09T20:35:01.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:00 vm05 ceph-mon[61345]: osdmap e699: 8 total, 8 up, 8 in 2026-03-09T20:35:01.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:00 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:35:01.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:00 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-142"}]: dispatch 2026-03-09T20:35:01.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:00 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:35:01.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:00 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-142"}]': finished 2026-03-09T20:35:01.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:00 vm09 ceph-mon[54524]: osdmap e699: 8 total, 8 up, 8 in 2026-03-09T20:35:01.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:00 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:35:01.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:00 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-142"}]: dispatch 2026-03-09T20:35:01.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:00 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:35:02.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:01 vm05 ceph-mon[51870]: pgmap v1071: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T20:35:02.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:01 vm05 ceph-mon[51870]: osdmap e700: 8 total, 8 up, 8 in 2026-03-09T20:35:02.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:01 vm05 ceph-mon[51870]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:35:02.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:01 vm05 ceph-mon[61345]: pgmap v1071: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T20:35:02.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:01 vm05 ceph-mon[61345]: osdmap e700: 8 total, 8 up, 8 in 2026-03-09T20:35:02.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:01 vm05 ceph-mon[61345]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:35:02.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:01 vm09 ceph-mon[54524]: pgmap v1071: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T20:35:02.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:01 vm09 ceph-mon[54524]: osdmap e700: 8 total, 8 up, 8 in 2026-03-09T20:35:02.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:01 vm09 ceph-mon[54524]: Health check update: 5 pool(s) do not have an application enabled 
(POOL_APP_NOT_ENABLED) 2026-03-09T20:35:03.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:03 vm09 ceph-mon[54524]: osdmap e701: 8 total, 8 up, 8 in 2026-03-09T20:35:03.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:03 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:35:03.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:03 vm05 ceph-mon[51870]: osdmap e701: 8 total, 8 up, 8 in 2026-03-09T20:35:03.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:03 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:35:03.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:03 vm05 ceph-mon[61345]: osdmap e701: 8 total, 8 up, 8 in 2026-03-09T20:35:03.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:03 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:35:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:04 vm05 ceph-mon[51870]: pgmap v1074: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T20:35:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:04 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-144","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:35:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:04 vm05 ceph-mon[51870]: osdmap e702: 8 total, 8 up, 8 in 2026-03-09T20:35:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:04 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:35:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:04 vm05 ceph-mon[61345]: pgmap v1074: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T20:35:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:04 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-144","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:35:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:04 vm05 ceph-mon[61345]: osdmap e702: 8 total, 8 up, 8 in 2026-03-09T20:35:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:04 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:35:04.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:04 vm09 ceph-mon[54524]: pgmap v1074: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T20:35:04.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:04 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-144","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:35:04.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:04 vm09 ceph-mon[54524]: osdmap e702: 8 total, 8 up, 8 in 2026-03-09T20:35:04.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:04 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:35:05.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:05 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-144", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:35:05.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:05 vm05 ceph-mon[51870]: osdmap e703: 8 total, 8 up, 8 in 2026-03-09T20:35:05.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:05 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-144"}]: dispatch 2026-03-09T20:35:05.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:05 vm05 ceph-mon[51870]: pgmap v1077: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 0 op/s 2026-03-09T20:35:05.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:05 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-144"}]': finished 2026-03-09T20:35:05.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:05 vm05 ceph-mon[51870]: osdmap e704: 8 total, 8 up, 8 in 2026-03-09T20:35:05.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:05 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-144", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:35:05.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:05 vm05 ceph-mon[61345]: osdmap e703: 8 total, 8 up, 8 in 2026-03-09T20:35:05.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:05 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-144"}]: dispatch 2026-03-09T20:35:05.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:05 vm05 ceph-mon[61345]: pgmap v1077: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 0 op/s 2026-03-09T20:35:05.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:05 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-144"}]': finished 2026-03-09T20:35:05.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:05 vm05 ceph-mon[61345]: osdmap e704: 8 total, 8 up, 8 in 2026-03-09T20:35:05.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:05 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-144", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:35:05.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:05 vm09 ceph-mon[54524]: osdmap e703: 8 total, 8 up, 8 in 2026-03-09T20:35:05.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:05 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-144"}]: dispatch 2026-03-09T20:35:05.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:05 vm09 ceph-mon[54524]: pgmap v1077: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 0 op/s 2026-03-09T20:35:05.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:05 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-144"}]': finished 2026-03-09T20:35:05.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:05 vm09 ceph-mon[54524]: osdmap e704: 8 total, 8 up, 8 in 2026-03-09T20:35:06.260 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:35:05 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:35:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:06 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-144", "mode": "readproxy"}]: dispatch 2026-03-09T20:35:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:06 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:35:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:06 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-144", "mode": "readproxy"}]: dispatch 2026-03-09T20:35:06.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:06 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:35:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:06 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-144", "mode": "readproxy"}]: dispatch 2026-03-09T20:35:06.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:06 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:35:07.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:07 vm09 ceph-mon[54524]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:35:07.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:07 vm09 ceph-mon[54524]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:35:07.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:07 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-144", "mode": "readproxy"}]': finished 2026-03-09T20:35:07.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:07 vm09 ceph-mon[54524]: osdmap e705: 8 total, 8 up, 8 in 2026-03-09T20:35:07.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:07 vm09 ceph-mon[54524]: pgmap v1080: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:35:07.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:07 vm05 ceph-mon[51870]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:35:07.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:07 vm05 ceph-mon[51870]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:35:07.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:07 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-144", "mode": "readproxy"}]': finished 2026-03-09T20:35:07.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:07 vm05 ceph-mon[51870]: osdmap e705: 8 total, 8 up, 8 in 2026-03-09T20:35:07.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:07 vm05 ceph-mon[51870]: pgmap v1080: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:35:07.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:07 vm05 ceph-mon[61345]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:35:07.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:07 vm05 ceph-mon[61345]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:35:07.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:07 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-144", "mode": "readproxy"}]': finished 2026-03-09T20:35:07.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:07 vm05 ceph-mon[61345]: osdmap e705: 8 total, 8 up, 8 in 2026-03-09T20:35:07.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:07 vm05 ceph-mon[61345]: pgmap v1080: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:35:08.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:35:08 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:35:08] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:35:09.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:09 vm09 ceph-mon[54524]: pgmap v1081: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 949 B/s rd, 189 B/s wr, 1 op/s 2026-03-09T20:35:09.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:09 vm05 ceph-mon[61345]: pgmap v1081: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 949 B/s rd, 189 B/s wr, 1 op/s 2026-03-09T20:35:09.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:09 vm05 ceph-mon[51870]: pgmap v1081: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 949 B/s rd, 189 B/s wr, 1 op/s 2026-03-09T20:35:11.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:11 vm09 ceph-mon[54524]: pgmap v1082: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 160 B/s wr, 1 op/s 2026-03-09T20:35:11.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:11 vm05 ceph-mon[61345]: pgmap v1082: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 160 B/s wr, 1 op/s 2026-03-09T20:35:11.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:11 vm05 ceph-mon[51870]: pgmap v1082: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 160 B/s wr, 1 op/s 2026-03-09T20:35:13.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:13 vm09 ceph-mon[54524]: pgmap v1083: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 127 B/s wr, 1 op/s 2026-03-09T20:35:13.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:13 vm05 ceph-mon[61345]: pgmap v1083: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 127 B/s wr, 1 op/s 2026-03-09T20:35:13.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:13 vm05 ceph-mon[51870]: pgmap v1083: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 127 B/s wr, 1 op/s 2026-03-09T20:35:15.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:15 vm05 ceph-mon[61345]: pgmap v1084: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 109 B/s wr, 1 op/s 2026-03-09T20:35:15.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:15 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:35:15.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:15 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:35:15.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:15 vm05 ceph-mon[51870]: pgmap v1084: 268 pgs: 268 active+clean; 4.3 MiB 
data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 109 B/s wr, 1 op/s 2026-03-09T20:35:15.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:15 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:35:15.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:15 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:35:15.931 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:15 vm09 ceph-mon[54524]: pgmap v1084: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 109 B/s wr, 1 op/s 2026-03-09T20:35:15.931 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:15 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:35:15.931 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:15 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:35:16.273 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:35:15 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:35:16.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:16 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:35:16.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:16 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:35:16.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:16 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:35:16.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:16 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:35:17.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:16 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:35:17.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:16 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:35:17.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:17 vm05 ceph-mon[61345]: pgmap v1085: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T20:35:17.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:17 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:35:17.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:17 vm05 ceph-mon[61345]: osdmap e706: 8 total, 8 up, 8 in 2026-03-09T20:35:17.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:17 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-144"}]: dispatch 2026-03-09T20:35:17.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:17 vm05 ceph-mon[51870]: pgmap v1085: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T20:35:17.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:17 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:35:17.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:17 vm05 ceph-mon[51870]: osdmap e706: 8 total, 8 up, 8 in 2026-03-09T20:35:17.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:17 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-144"}]: dispatch 2026-03-09T20:35:18.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:17 vm09 ceph-mon[54524]: pgmap v1085: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T20:35:18.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:17 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:35:18.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:17 vm09 ceph-mon[54524]: osdmap e706: 8 total, 8 up, 8 in 2026-03-09T20:35:18.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:17 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-144"}]: dispatch 2026-03-09T20:35:18.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:18 vm05 ceph-mon[61345]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:35:18.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:18 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-144"}]': finished 2026-03-09T20:35:18.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:18 vm05 ceph-mon[61345]: osdmap e707: 8 total, 8 up, 8 in 2026-03-09T20:35:18.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:18 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:35:18.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:18 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-144"}]: dispatch 2026-03-09T20:35:18.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:18 vm05 ceph-mon[51870]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:35:18.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:18 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-144"}]': finished 2026-03-09T20:35:18.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:18 vm05 ceph-mon[51870]: osdmap e707: 8 total, 8 up, 8 in 2026-03-09T20:35:18.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:18 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:35:18.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:18 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-144"}]: dispatch 2026-03-09T20:35:18.911 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:35:18 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:35:18] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:35:19.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:18 vm09 ceph-mon[54524]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:35:19.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:18 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-144"}]': finished 2026-03-09T20:35:19.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:18 vm09 ceph-mon[54524]: osdmap e707: 8 total, 8 up, 8 in 2026-03-09T20:35:19.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:18 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:35:19.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:18 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-144"}]: dispatch 2026-03-09T20:35:19.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:19 vm05 ceph-mon[61345]: pgmap v1088: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T20:35:19.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:19 vm05 ceph-mon[61345]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:35:19.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:19 vm05 ceph-mon[61345]: osdmap e708: 8 total, 8 up, 8 in 2026-03-09T20:35:19.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:19 vm05 ceph-mon[51870]: pgmap v1088: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T20:35:19.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:19 vm05 ceph-mon[51870]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:35:19.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:19 vm05 ceph-mon[51870]: osdmap e708: 8 total, 8 up, 8 in 2026-03-09T20:35:20.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:19 vm09 ceph-mon[54524]: pgmap v1088: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T20:35:20.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:19 vm09 ceph-mon[54524]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:35:20.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:19 vm09 ceph-mon[54524]: osdmap e708: 8 total, 8 up, 8 in 2026-03-09T20:35:20.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:20 vm05 ceph-mon[61345]: osdmap e709: 8 total, 8 up, 8 in 2026-03-09T20:35:20.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:20 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-146","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:35:20.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:20 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-146","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:35:20.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:20 vm05 ceph-mon[61345]: osdmap e710: 8 total, 8 up, 8 in 2026-03-09T20:35:20.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:20 vm05 ceph-mon[51870]: osdmap e709: 8 total, 8 up, 8 in 2026-03-09T20:35:20.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:20 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-146","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:35:20.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:20 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-146","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:35:20.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:20 vm05 ceph-mon[51870]: osdmap e710: 8 total, 8 up, 8 in 2026-03-09T20:35:21.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:20 vm09 ceph-mon[54524]: osdmap e709: 8 total, 8 up, 8 in 2026-03-09T20:35:21.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:20 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-146","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:35:21.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:20 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-146","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:35:21.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:20 vm09 ceph-mon[54524]: osdmap e710: 8 total, 8 up, 8 in 2026-03-09T20:35:21.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:21 vm05 ceph-mon[61345]: pgmap v1091: 268 pgs: 18 creating+peering, 14 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:35:21.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:21 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-146", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:35:21.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:21 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-146", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:35:21.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:21 vm05 ceph-mon[61345]: osdmap e711: 8 total, 8 up, 8 in 2026-03-09T20:35:21.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:21 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-146"}]: dispatch 2026-03-09T20:35:21.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:21 vm05 ceph-mon[51870]: pgmap v1091: 268 pgs: 18 creating+peering, 14 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:35:21.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:21 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-146", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:35:21.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:21 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-146", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:35:21.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:21 vm05 ceph-mon[51870]: osdmap e711: 8 total, 8 up, 8 in 2026-03-09T20:35:21.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:21 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-146"}]: dispatch 2026-03-09T20:35:22.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:21 vm09 ceph-mon[54524]: pgmap v1091: 268 pgs: 18 creating+peering, 14 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:35:22.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:21 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-146", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:35:22.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:21 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-146", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:35:22.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:21 vm09 ceph-mon[54524]: osdmap e711: 8 total, 8 up, 8 in 2026-03-09T20:35:22.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:21 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-146"}]: dispatch 2026-03-09T20:35:23.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:23 vm05 ceph-mon[61345]: pgmap v1094: 268 pgs: 7 creating+activating, 18 creating+peering, 243 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:35:23.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:23 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-146"}]': finished 2026-03-09T20:35:23.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:23 vm05 ceph-mon[61345]: osdmap e712: 8 total, 8 up, 8 in 2026-03-09T20:35:23.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:23 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-146", "mode": "writeback"}]: dispatch 2026-03-09T20:35:23.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:23 vm05 ceph-mon[51870]: pgmap v1094: 268 pgs: 7 creating+activating, 18 creating+peering, 243 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:35:23.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:23 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-146"}]': finished 2026-03-09T20:35:23.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:23 vm05 ceph-mon[51870]: osdmap e712: 8 total, 8 up, 8 in 2026-03-09T20:35:23.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:23 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-146", "mode": "writeback"}]: dispatch 2026-03-09T20:35:24.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:23 vm09 ceph-mon[54524]: pgmap v1094: 268 pgs: 7 creating+activating, 18 creating+peering, 243 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:35:24.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:23 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-94573-111", "overlaypool": "test-rados-api-vm05-94573-146"}]': finished 2026-03-09T20:35:24.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:23 vm09 ceph-mon[54524]: osdmap e712: 8 total, 8 up, 8 in 2026-03-09T20:35:24.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:23 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-146", "mode": "writeback"}]: dispatch 2026-03-09T20:35:24.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:24 vm05 ceph-mon[61345]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:35:24.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:24 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-146", "mode": "writeback"}]': finished 2026-03-09T20:35:24.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:24 vm05 ceph-mon[61345]: osdmap e713: 8 total, 8 up, 8 in 2026-03-09T20:35:24.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:24 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T20:35:24.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:24 vm05 ceph-mon[51870]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:35:24.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:24 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-146", "mode": "writeback"}]': finished 2026-03-09T20:35:24.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:24 vm05 ceph-mon[51870]: osdmap e713: 8 total, 8 up, 8 in 2026-03-09T20:35:24.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:24 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T20:35:25.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:24 vm09 ceph-mon[54524]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-09T20:35:25.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:24 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-94573-146", "mode": "writeback"}]': finished 2026-03-09T20:35:25.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:24 vm09 ceph-mon[54524]: osdmap e713: 8 total, 8 up, 8 in 2026-03-09T20:35:25.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:24 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-09T20:35:25.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:25 vm05 ceph-mon[51870]: pgmap v1097: 268 pgs: 7 creating+activating, 18 creating+peering, 243 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:35:25.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:25 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-146","var": "hit_set_count","val": "2"}]': finished 2026-03-09T20:35:25.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:25 vm05 ceph-mon[51870]: osdmap e714: 8 total, 8 up, 8 in 2026-03-09T20:35:25.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:25 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:35:25.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:25 vm05 ceph-mon[61345]: pgmap v1097: 268 pgs: 7 creating+activating, 18 creating+peering, 243 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:35:25.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:25 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-146","var": "hit_set_count","val": "2"}]': finished 2026-03-09T20:35:25.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:25 vm05 ceph-mon[61345]: osdmap e714: 8 total, 8 up, 8 in 2026-03-09T20:35:25.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:25 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:35:25.938 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:25 vm09 ceph-mon[54524]: pgmap v1097: 268 pgs: 7 creating+activating, 18 creating+peering, 243 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:35:25.938 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:25 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-146","var": "hit_set_count","val": "2"}]': finished 2026-03-09T20:35:25.938 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:25 vm09 ceph-mon[54524]: osdmap e714: 8 total, 8 up, 8 in 2026-03-09T20:35:25.938 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:25 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-09T20:35:26.273 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:35:25 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:35:27.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:26 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-146","var": "hit_set_period","val": "600"}]': finished 2026-03-09T20:35:27.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:26 vm09 ceph-mon[54524]: osdmap e715: 8 total, 8 up, 8 in 2026-03-09T20:35:27.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:26 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T20:35:27.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:26 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:35:27.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:26 vm09 ceph-mon[54524]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:35:27.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:26 vm09 ceph-mon[54524]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:35:27.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:26 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-146","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T20:35:27.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:26 vm09 ceph-mon[54524]: osdmap e716: 8 total, 8 up, 8 in 2026-03-09T20:35:27.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:26 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T20:35:27.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:26 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-146","var": "hit_set_period","val": "600"}]': finished 2026-03-09T20:35:27.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:26 vm05 ceph-mon[61345]: osdmap e715: 8 total, 8 up, 8 in 2026-03-09T20:35:27.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:26 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T20:35:27.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:26 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:35:27.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:26 vm05 ceph-mon[61345]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:35:27.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:26 vm05 ceph-mon[61345]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:35:27.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:26 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-146","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T20:35:27.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:26 vm05 ceph-mon[61345]: osdmap e716: 8 total, 8 up, 8 in 2026-03-09T20:35:27.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:26 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T20:35:27.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:26 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-146","var": "hit_set_period","val": "600"}]': finished 2026-03-09T20:35:27.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:26 vm05 ceph-mon[51870]: osdmap e715: 8 total, 8 up, 8 in 2026-03-09T20:35:27.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:26 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-09T20:35:27.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:26 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:35:27.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:26 vm05 ceph-mon[51870]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:35:27.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:26 vm05 ceph-mon[51870]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-09T20:35:27.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:26 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-146","var": "hit_set_type","val": "bloom"}]': finished 2026-03-09T20:35:27.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:26 vm05 ceph-mon[51870]: osdmap e716: 8 total, 8 up, 8 in 2026-03-09T20:35:27.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:26 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-09T20:35:28.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:27 vm09 ceph-mon[54524]: pgmap v1100: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 4 op/s 2026-03-09T20:35:28.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:27 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-146","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T20:35:28.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:27 vm09 ceph-mon[54524]: osdmap e717: 8 total, 8 up, 8 in 2026-03-09T20:35:28.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:27 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-146","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T20:35:28.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:27 vm05 ceph-mon[61345]: pgmap v1100: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 4 op/s 2026-03-09T20:35:28.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:27 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-146","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T20:35:28.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:27 vm05 ceph-mon[61345]: osdmap e717: 8 total, 8 up, 8 in 2026-03-09T20:35:28.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:27 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-146","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T20:35:28.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:27 vm05 ceph-mon[51870]: pgmap v1100: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 4 op/s 2026-03-09T20:35:28.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:27 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-146","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-09T20:35:28.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:27 vm05 ceph-mon[51870]: osdmap e717: 8 total, 8 up, 8 in 2026-03-09T20:35:28.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:27 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-146","var": "target_max_objects","val": "1"}]: dispatch 2026-03-09T20:35:28.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:35:28 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:35:28] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:35:29.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:29 vm05 ceph-mon[61345]: pgmap v1103: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 4 op/s 2026-03-09T20:35:29.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:29 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-146","var": "target_max_objects","val": "1"}]': finished 2026-03-09T20:35:29.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:29 vm05 ceph-mon[61345]: osdmap e718: 8 total, 8 up, 8 in 2026-03-09T20:35:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:29 vm05 ceph-mon[51870]: pgmap v1103: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 4 op/s 2026-03-09T20:35:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:29 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-146","var": "target_max_objects","val": "1"}]': finished 2026-03-09T20:35:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:29 vm05 ceph-mon[51870]: osdmap e718: 8 total, 8 up, 8 in 2026-03-09T20:35:30.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:29 vm09 ceph-mon[54524]: pgmap v1103: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 4 op/s 2026-03-09T20:35:30.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:29 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-146","var": "target_max_objects","val": "1"}]': finished 2026-03-09T20:35:30.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:29 vm09 ceph-mon[54524]: osdmap e718: 8 total, 8 up, 8 in 2026-03-09T20:35:31.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:31 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:35:31.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:31 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:35:31.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:31 vm05 ceph-mon[61345]: pgmap v1105: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T20:35:31.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:31 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:35:31.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:31 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:35:31.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:31 vm05 ceph-mon[51870]: pgmap v1105: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T20:35:31.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:31 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:35:31.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:31 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:35:31.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:31 vm09 ceph-mon[54524]: pgmap v1105: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T20:35:32.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:32 vm05 ceph-mon[61345]: Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 2026-03-09T20:35:32.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:32 vm05 ceph-mon[51870]: Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 2026-03-09T20:35:32.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:32 vm09 ceph-mon[54524]: Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 2026-03-09T20:35:33.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:33 vm05 ceph-mon[61345]: pgmap v1106: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T20:35:33.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:33 vm05 ceph-mon[51870]: pgmap v1106: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T20:35:33.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:33 vm09 ceph-mon[54524]: pgmap v1106: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T20:35:35.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:35 vm09 ceph-mon[54524]: pgmap v1107: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 661 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T20:35:35.910 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:35 vm05 ceph-mon[61345]: pgmap v1107: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 661 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T20:35:35.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:35 vm05 ceph-mon[51870]: pgmap v1107: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 661 B/s rd, 0 B/s wr, 0 op/s 2026-03-09T20:35:36.273 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:35:35 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:35:36.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:36 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:35:36.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:36 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:35:36.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:36 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:35:37.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:37 vm09 ceph-mon[54524]: pgmap v1108: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T20:35:37.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:37 vm05 ceph-mon[61345]: pgmap v1108: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T20:35:37.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:37 vm05 ceph-mon[51870]: pgmap v1108: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T20:35:38.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:35:38 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:35:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:35:39.774 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:39 vm09 ceph-mon[54524]: pgmap v1109: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T20:35:39.774 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:39 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:35:39.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:39 vm05 ceph-mon[61345]: pgmap v1109: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T20:35:39.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:39 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:35:39.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:39 vm05 ceph-mon[51870]: pgmap v1109: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s 2026-03-09T20:35:39.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:39 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:35:40.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:40 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:35:40.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:40 vm09 ceph-mon[54524]: osdmap e719: 8 total, 8 up, 8 in 2026-03-09T20:35:40.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:40 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-146"}]: dispatch 2026-03-09T20:35:40.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:40 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:35:40.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:40 vm05 ceph-mon[61345]: osdmap e719: 8 total, 8 up, 8 in 2026-03-09T20:35:40.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:40 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-146"}]: dispatch 2026-03-09T20:35:40.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:40 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:35:40.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:40 vm05 ceph-mon[51870]: osdmap e719: 8 total, 8 up, 8 in 2026-03-09T20:35:40.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:40 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-146"}]: dispatch 2026-03-09T20:35:41.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:41 vm09 ceph-mon[54524]: pgmap v1111: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T20:35:41.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:41 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-146"}]': finished 2026-03-09T20:35:41.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:41 vm09 ceph-mon[54524]: osdmap e720: 8 total, 8 up, 8 in 2026-03-09T20:35:41.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:41 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:35:41.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:41 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-146"}]: dispatch 2026-03-09T20:35:41.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:41 vm05 ceph-mon[61345]: pgmap v1111: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T20:35:41.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:41 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-146"}]': finished 2026-03-09T20:35:41.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:41 vm05 ceph-mon[61345]: osdmap e720: 8 total, 8 up, 8 in 2026-03-09T20:35:41.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:41 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:35:41.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:41 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-146"}]: dispatch 2026-03-09T20:35:41.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:41 vm05 ceph-mon[51870]: pgmap v1111: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-09T20:35:41.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:41 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-146"}]': finished 2026-03-09T20:35:41.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:41 vm05 ceph-mon[51870]: osdmap e720: 8 total, 8 up, 8 in 2026-03-09T20:35:41.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:41 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:35:41.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:41 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-146"}]: dispatch 2026-03-09T20:35:42.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:42 vm09 ceph-mon[54524]: osdmap e721: 8 total, 8 up, 8 in 2026-03-09T20:35:42.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:42 vm05 ceph-mon[61345]: osdmap e721: 8 total, 8 up, 8 in 2026-03-09T20:35:42.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:42 vm05 ceph-mon[51870]: osdmap e721: 8 total, 8 up, 8 in 2026-03-09T20:35:43.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:43 vm09 ceph-mon[54524]: pgmap v1114: 236 pgs: 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-09T20:35:43.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:43 vm09 ceph-mon[54524]: Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-09T20:35:43.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:43 vm09 ceph-mon[54524]: osdmap e722: 8 total, 8 up, 8 in 2026-03-09T20:35:43.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:43 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:35:43.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:43 vm05 ceph-mon[61345]: pgmap v1114: 236 pgs: 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-09T20:35:43.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:43 vm05 ceph-mon[61345]: Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-09T20:35:43.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:43 vm05 ceph-mon[61345]: osdmap e722: 8 total, 8 up, 8 in 2026-03-09T20:35:43.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:43 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:35:43.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:43 vm05 ceph-mon[51870]: pgmap v1114: 236 pgs: 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-09T20:35:43.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:43 vm05 ceph-mon[51870]: Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-09T20:35:43.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:43 vm05 ceph-mon[51870]: osdmap e722: 8 total, 8 up, 8 in 2026-03-09T20:35:43.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:43 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:35:44.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:44 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-148","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:35:44.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:44 vm09 ceph-mon[54524]: osdmap e723: 8 total, 8 up, 8 in 2026-03-09T20:35:44.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:44 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:35:44.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:44 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-148","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:35:44.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:44 vm05 ceph-mon[61345]: osdmap e723: 8 total, 8 up, 8 in 2026-03-09T20:35:44.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:44 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:35:44.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:44 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-148","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:35:44.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:44 vm05 ceph-mon[51870]: osdmap e723: 8 total, 8 up, 8 in 2026-03-09T20:35:44.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:44 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-09T20:35:45.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:45 vm09 ceph-mon[54524]: pgmap v1117: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:35:45.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:45 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-148", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:35:45.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:45 vm09 ceph-mon[54524]: osdmap e724: 8 total, 8 up, 8 in 2026-03-09T20:35:45.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:45 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-148"}]: dispatch 2026-03-09T20:35:45.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:45 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:35:45.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:45 vm05 ceph-mon[61345]: pgmap v1117: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:35:45.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:45 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-148", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:35:45.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:45 vm05 ceph-mon[61345]: osdmap e724: 8 total, 8 up, 8 in 2026-03-09T20:35:45.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:45 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-148"}]: dispatch 2026-03-09T20:35:45.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:45 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:35:45.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:45 vm05 ceph-mon[51870]: pgmap v1117: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:35:45.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:45 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-148", "force_nonempty": "--force-nonempty" }]': finished 2026-03-09T20:35:45.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:45 vm05 ceph-mon[51870]: osdmap e724: 8 total, 8 up, 8 in 2026-03-09T20:35:45.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:45 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-148"}]: dispatch 2026-03-09T20:35:45.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:45 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:35:46.273 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:35:45 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:35:46.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:46 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-148"}]': finished 2026-03-09T20:35:46.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:46 vm09 ceph-mon[54524]: osdmap e725: 8 total, 8 up, 8 in 2026-03-09T20:35:46.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:46 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:35:46.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:46 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-148"}]: dispatch 2026-03-09T20:35:46.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:46 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:35:46.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:46 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-148"}]': finished 2026-03-09T20:35:46.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:46 vm05 ceph-mon[61345]: osdmap e725: 8 total, 8 up, 8 in 2026-03-09T20:35:46.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:46 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:35:46.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:46 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-148"}]: dispatch 2026-03-09T20:35:46.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:46 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:35:46.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:46 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-148"}]': finished 2026-03-09T20:35:46.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:46 vm05 ceph-mon[51870]: osdmap e725: 8 total, 8 up, 8 in 2026-03-09T20:35:46.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:46 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:35:46.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:46 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-148"}]: dispatch 2026-03-09T20:35:46.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:46 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:35:47.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:47 vm09 ceph-mon[54524]: pgmap v1120: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T20:35:47.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:47 vm09 ceph-mon[54524]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:35:47.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:47 vm09 ceph-mon[54524]: osdmap e726: 8 total, 8 up, 8 in 2026-03-09T20:35:47.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:47 vm05 ceph-mon[61345]: pgmap v1120: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T20:35:47.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:47 vm05 ceph-mon[61345]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:35:47.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:47 vm05 ceph-mon[61345]: osdmap e726: 8 total, 8 up, 8 in 2026-03-09T20:35:47.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:47 vm05 ceph-mon[51870]: pgmap v1120: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-09T20:35:47.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:47 vm05 ceph-mon[51870]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:35:47.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:47 vm05 ceph-mon[51870]: osdmap e726: 8 total, 8 up, 8 in 2026-03-09T20:35:48.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:48 vm05 ceph-mon[61345]: osdmap e727: 8 total, 8 up, 8 in 2026-03-09T20:35:48.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:48 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-150","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:35:48.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:48 vm05 ceph-mon[51870]: osdmap e727: 8 total, 8 up, 8 in 2026-03-09T20:35:48.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:48 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-150","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:35:48.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:35:48 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:35:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:35:49.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:48 vm09 ceph-mon[54524]: osdmap e727: 8 total, 8 up, 8 in 2026-03-09T20:35:49.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:48 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-150","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:35:49.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:49 vm05 ceph-mon[61345]: pgmap v1123: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T20:35:49.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:49 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-150","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:35:49.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:49 vm05 ceph-mon[61345]: osdmap e728: 8 total, 8 up, 8 in 2026-03-09T20:35:49.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:49 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:35:49.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:49 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-150"}]: dispatch 2026-03-09T20:35:49.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:49 vm05 ceph-mon[51870]: pgmap v1123: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T20:35:49.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:49 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-150","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:35:49.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:49 vm05 ceph-mon[51870]: osdmap e728: 8 total, 8 up, 8 in 2026-03-09T20:35:49.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:49 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:35:49.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:49 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-150"}]: dispatch 2026-03-09T20:35:50.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:49 vm09 ceph-mon[54524]: pgmap v1123: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T20:35:50.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:49 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-150","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:35:50.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:49 vm09 ceph-mon[54524]: osdmap e728: 8 total, 8 up, 8 in 2026-03-09T20:35:50.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:49 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:35:50.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:49 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-150"}]: dispatch 2026-03-09T20:35:50.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:50 vm05 ceph-mon[61345]: osdmap e729: 8 total, 8 up, 8 in 2026-03-09T20:35:50.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:50 vm05 ceph-mon[51870]: osdmap e729: 8 total, 8 up, 8 in 2026-03-09T20:35:51.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:50 vm09 ceph-mon[54524]: osdmap e729: 8 total, 8 up, 8 in 2026-03-09T20:35:52.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:51 vm09 ceph-mon[54524]: pgmap v1126: 236 pgs: 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:35:52.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:51 vm09 ceph-mon[54524]: osdmap e730: 8 total, 8 up, 8 in 2026-03-09T20:35:52.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:51 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-152","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:35:52.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:51 vm05 ceph-mon[61345]: pgmap v1126: 236 pgs: 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:35:52.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:51 vm05 ceph-mon[61345]: osdmap e730: 8 total, 8 up, 8 in 2026-03-09T20:35:52.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:51 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-152","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:35:52.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:51 vm05 ceph-mon[51870]: pgmap v1126: 236 pgs: 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:35:52.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:51 vm05 ceph-mon[51870]: osdmap e730: 8 total, 8 up, 8 in 2026-03-09T20:35:52.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:51 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-152","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:35:53.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:52 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-152","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:35:53.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:52 vm09 ceph-mon[54524]: osdmap e731: 8 total, 8 up, 8 in 2026-03-09T20:35:53.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:52 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:35:53.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:52 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-152"}]: dispatch 2026-03-09T20:35:53.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:52 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-152","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:35:53.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:52 vm05 ceph-mon[61345]: osdmap e731: 8 total, 8 up, 8 in 2026-03-09T20:35:53.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:52 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:35:53.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:52 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-152"}]: dispatch 2026-03-09T20:35:53.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:52 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-152","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:35:53.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:52 vm05 ceph-mon[51870]: osdmap e731: 8 total, 8 up, 8 in 2026-03-09T20:35:53.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:52 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:35:53.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:52 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-152"}]: dispatch 2026-03-09T20:35:54.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:53 vm09 ceph-mon[54524]: pgmap v1129: 268 pgs: 12 creating+peering, 20 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T20:35:54.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:53 vm09 ceph-mon[54524]: osdmap e732: 8 total, 8 up, 8 in 2026-03-09T20:35:54.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:53 vm05 ceph-mon[61345]: pgmap v1129: 268 pgs: 12 creating+peering, 20 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T20:35:54.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:53 vm05 ceph-mon[61345]: osdmap e732: 8 total, 8 up, 8 in 2026-03-09T20:35:54.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:53 vm05 ceph-mon[51870]: pgmap v1129: 268 pgs: 12 creating+peering, 20 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-09T20:35:54.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:53 vm05 ceph-mon[51870]: osdmap e732: 8 total, 8 up, 8 in 2026-03-09T20:35:55.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:54 vm05 ceph-mon[61345]: osdmap e733: 8 total, 8 up, 8 in 2026-03-09T20:35:55.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:54 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-154","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:35:55.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:54 vm05 ceph-mon[51870]: osdmap e733: 8 total, 8 up, 8 in 2026-03-09T20:35:55.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:54 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-154","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:35:55.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:54 vm09 ceph-mon[54524]: osdmap e733: 8 total, 8 up, 8 in 2026-03-09T20:35:55.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:54 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-154","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:35:56.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:55 vm05 ceph-mon[61345]: pgmap v1132: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:35:56.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:55 vm05 ceph-mon[61345]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:35:56.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:55 vm05 ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-154","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:35:56.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:55 vm05 ceph-mon[61345]: osdmap e734: 8 total, 8 up, 8 in 2026-03-09T20:35:56.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:55 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-111","var": "dedup_tier","val": "test-rados-api-vm05-94573-154"}]: dispatch 2026-03-09T20:35:56.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:55 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:35:56.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:55 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-154"}]: dispatch 2026-03-09T20:35:56.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:55 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:35:56.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:55 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:35:56.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:55 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:35:56.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:55 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:35:56.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:55 vm05 ceph-mon[51870]: pgmap v1132: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:35:56.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:55 vm05 ceph-mon[51870]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:35:56.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:55 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-154","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:35:56.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:55 vm05 ceph-mon[51870]: osdmap e734: 8 total, 8 up, 8 in 2026-03-09T20:35:56.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:55 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-111","var": "dedup_tier","val": "test-rados-api-vm05-94573-154"}]: dispatch 2026-03-09T20:35:56.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:55 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:35:56.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:55 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-154"}]: dispatch 2026-03-09T20:35:56.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:55 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:35:56.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:55 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:35:56.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:55 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:35:56.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:55 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:35:56.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:55 vm09 ceph-mon[54524]: pgmap v1132: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 255 B/s wr, 1 op/s 2026-03-09T20:35:56.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:55 vm09 ceph-mon[54524]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:35:56.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:55 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-94573-154","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-09T20:35:56.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:55 vm09 ceph-mon[54524]: osdmap e734: 8 total, 8 up, 8 in 2026-03-09T20:35:56.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:55 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-94573-111","var": "dedup_tier","val": "test-rados-api-vm05-94573-154"}]: dispatch 2026-03-09T20:35:56.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:55 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:35:56.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:55 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-94573-111", "tierpool": "test-rados-api-vm05-94573-154"}]: dispatch 2026-03-09T20:35:56.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:55 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:35:56.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:55 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:35:56.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:55 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:35:56.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:55 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:35:56.273 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:35:55 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:35:57.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:56 vm05 ceph-mon[61345]: osdmap e735: 8 total, 8 up, 8 in 2026-03-09T20:35:57.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:56 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:35:57.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:56 vm05 ceph-mon[51870]: osdmap e735: 8 total, 8 up, 8 in 2026-03-09T20:35:57.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:56 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:35:57.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:56 vm09 ceph-mon[54524]: osdmap e735: 8 total, 8 up, 8 in 2026-03-09T20:35:57.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:56 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:35:58.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:57 vm05 ceph-mon[61345]: pgmap v1135: 236 pgs: 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 3 op/s 2026-03-09T20:35:58.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:57 vm05 ceph-mon[61345]: osdmap e736: 8 total, 8 up, 8 in 2026-03-09T20:35:58.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:57 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:35:58.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:57 vm05 ceph-mon[51870]: pgmap v1135: 236 pgs: 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 3 op/s 2026-03-09T20:35:58.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:57 vm05 ceph-mon[51870]: osdmap e736: 8 total, 8 up, 8 in 2026-03-09T20:35:58.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:57 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:35:58.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:57 vm09 ceph-mon[54524]: pgmap v1135: 236 pgs: 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 3 op/s 2026-03-09T20:35:58.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:57 vm09 ceph-mon[54524]: osdmap e736: 8 total, 8 up, 8 in 2026-03-09T20:35:58.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:57 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:35:58.858 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.TryFlush (8266 ms) 2026-03-09T20:35:58.858 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.FailedFlush 2026-03-09T20:35:58.858 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.FailedFlush (11641 ms) 2026-03-09T20:35:58.858 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.Flush 2026-03-09T20:35:58.858 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.Flush (8120 ms) 2026-03-09T20:35:58.858 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.FlushSnap 2026-03-09T20:35:58.858 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.FlushSnap (13288 ms) 2026-03-09T20:35:58.858 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.FlushTryFlushRaces 2026-03-09T20:35:58.858 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.FlushTryFlushRaces (8042 ms) 2026-03-09T20:35:58.858 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.TryFlushReadRace 2026-03-09T20:35:58.858 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.TryFlushReadRace (7563 ms) 2026-03-09T20:35:58.858 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.HitSetRead 2026-03-09T20:35:58.858 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: hmm, no HitSet yet 2026-03-09T20:35:58.858 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: ok, hit_set contains 329:602f83fe:::foo:head 2026-03-09T20:35:58.858 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.HitSetRead (9148 ms) 2026-03-09T20:35:58.858 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.HitSetTrim 2026-03-09T20:35:58.858 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773088474,0 2026-03-09T20:35:58.858 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: first is 1773088474 2026-03-09T20:35:58.859 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773088474,0 2026-03-09T20:35:58.859 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773088474,0 2026-03-09T20:35:58.859 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773088474,1773088476,1773088477,0 2026-03-09T20:35:58.859 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773088474,1773088476,1773088477,0 2026-03-09T20:35:58.859 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773088474,1773088476,1773088477,0 
2026-03-09T20:35:58.859 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773088474,1773088476,1773088477,1773088479,1773088480,0 2026-03-09T20:35:58.859 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773088474,1773088476,1773088477,1773088479,1773088480,0 2026-03-09T20:35:58.859 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773088474,1773088476,1773088477,1773088479,1773088480,0 2026-03-09T20:35:58.859 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773088474,1773088476,1773088477,1773088479,1773088480,1773088482,1773088483,0 2026-03-09T20:35:58.859 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773088474,1773088476,1773088477,1773088479,1773088480,1773088482,1773088483,0 2026-03-09T20:35:58.859 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773088474,1773088476,1773088477,1773088479,1773088480,1773088482,1773088483,0 2026-03-09T20:35:58.859 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773088477,1773088479,1773088480,1773088482,1773088483,1773088485,1773088486,0 2026-03-09T20:35:58.859 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: first now 1773088477, trimmed 2026-03-09T20:35:58.859 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.HitSetTrim (20242 ms) 2026-03-09T20:35:58.859 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.PromoteOn2ndRead 2026-03-09T20:35:58.859 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: foo0 2026-03-09T20:35:58.859 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: verifying foo0 is eventually promoted 2026-03-09T20:35:58.859 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.PromoteOn2ndRead (14191 ms) 2026-03-09T20:35:58.859 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.ProxyRead 2026-03-09T20:35:58.859 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.ProxyRead (17684 ms) 2026-03-09T20:35:58.859 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.CachePin 2026-03-09T20:35:58.859 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.CachePin (22914 ms) 2026-03-09T20:35:58.859 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.SetRedirectRead 2026-03-09T20:35:58.859 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.SetRedirectRead (5058 ms) 2026-03-09T20:35:58.859 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.SetChunkRead 2026-03-09T20:35:58.859 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.SetChunkRead (3004 ms) 2026-03-09T20:35:58.859 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.ManifestPromoteRead 2026-03-09T20:35:58.859 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.ManifestPromoteRead (3227 ms) 2026-03-09T20:35:58.859 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.TrySetDedupTier 2026-03-09T20:35:58.859 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.TrySetDedupTier (3019 ms) 2026-03-09T20:35:58.859 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [----------] 22 tests from LibRadosTwoPoolsECPP (231227 ms total) 2026-03-09T20:35:58.859 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: 
2026-03-09T20:35:58.859 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [----------] Global test environment tear-down 2026-03-09T20:35:58.859 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [==========] 77 tests from 4 test suites ran. (849071 ms total) 2026-03-09T20:35:58.859 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ PASSED ] 77 tests. 2026-03-09T20:35:58.860 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-09T20:35:58.860 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94260 2026-03-09T20:35:58.860 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 94260 2026-03-09T20:35:58.860 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-09T20:35:58.860 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94549 2026-03-09T20:35:58.860 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 94549 2026-03-09T20:35:58.860 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-09T20:35:58.860 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=95043 2026-03-09T20:35:58.860 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 95043 2026-03-09T20:35:58.860 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-09T20:35:58.860 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94825 2026-03-09T20:35:58.860 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 94825 2026-03-09T20:35:58.860 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-09T20:35:58.860 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94618 2026-03-09T20:35:58.860 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 94618 2026-03-09T20:35:58.860 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-09T20:35:58.860 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94341 2026-03-09T20:35:58.860 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 94341 2026-03-09T20:35:58.860 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-09T20:35:58.860 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94733 2026-03-09T20:35:58.860 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 94733 2026-03-09T20:35:58.860 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-09T20:35:58.860 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=95186 2026-03-09T20:35:58.860 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 95186 2026-03-09T20:35:58.860 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-09T20:35:58.860 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=95225 2026-03-09T20:35:58.860 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 95225 2026-03-09T20:35:58.860 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-09T20:35:58.860 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94385 2026-03-09T20:35:58.860 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 94385 2026-03-09T20:35:58.860 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-09T20:35:58.860 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=95077 2026-03-09T20:35:58.860 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 95077 2026-03-09T20:35:58.860 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-09T20:35:58.860 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94237 2026-03-09T20:35:58.860 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 94237 2026-03-09T20:35:58.861 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-09T20:35:58.861 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94295 2026-03-09T20:35:58.861 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 94295 
2026-03-09T20:35:58.861 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-09T20:35:58.861 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94996 2026-03-09T20:35:58.861 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 94996 2026-03-09T20:35:58.861 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-09T20:35:58.861 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94226 2026-03-09T20:35:58.861 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 94226 2026-03-09T20:35:58.861 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-09T20:35:58.861 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94245 2026-03-09T20:35:58.861 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 94245 2026-03-09T20:35:58.861 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-09T20:35:58.861 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=95385 2026-03-09T20:35:58.861 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 95385 2026-03-09T20:35:58.861 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-09T20:35:58.861 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94568 2026-03-09T20:35:58.861 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 94568 2026-03-09T20:35:58.861 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-09T20:35:58.861 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=95513 2026-03-09T20:35:58.861 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 95513 2026-03-09T20:35:58.861 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-09T20:35:58.861 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94695 2026-03-09T20:35:58.861 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 94695 2026-03-09T20:35:58.861 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-09T20:35:58.861 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=95150 2026-03-09T20:35:58.861 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 95150 2026-03-09T20:35:58.861 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-09T20:35:58.861 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=95597 2026-03-09T20:35:58.861 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 95597 2026-03-09T20:35:58.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:58 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:35:58.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:58 vm05 ceph-mon[61345]: osdmap e737: 8 total, 8 up, 8 in 2026-03-09T20:35:58.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:58 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:35:58.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:58 vm05 ceph-mon[61345]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:35:58.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:58 vm05 ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:35:58.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:58 vm05 ceph-mon[51870]: osdmap e737: 8 total, 8 up, 8 in 2026-03-09T20:35:58.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:58 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:35:58.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:58 vm05 ceph-mon[51870]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:35:58.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:35:58 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:35:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:35:59.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:58 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:35:59.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:58 vm09 ceph-mon[54524]: osdmap e737: 8 total, 8 up, 8 in 2026-03-09T20:35:59.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:58 vm09 ceph-mon[54524]: from='client.? v1:192.168.123.105:0/289205040' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-94573-111"}]: dispatch 2026-03-09T20:35:59.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:58 vm09 ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/289205040' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-94573-111"}]': finished 2026-03-09T20:35:59.273 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:35:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=sqlstore.transactions t=2026-03-09T20:35:58.777337845Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" 2026-03-09T20:36:00.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:59 vm05 ceph-mon[61345]: pgmap v1138: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:36:00.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:35:59 vm05 ceph-mon[61345]: osdmap e738: 8 total, 8 up, 8 in 2026-03-09T20:36:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:59 vm05 ceph-mon[51870]: pgmap v1138: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:36:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:35:59 vm05 ceph-mon[51870]: osdmap e738: 8 total, 8 up, 8 in 2026-03-09T20:36:00.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:59 vm09 ceph-mon[54524]: pgmap v1138: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:36:00.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:35:59 vm09 ceph-mon[54524]: osdmap e738: 8 total, 8 up, 8 in 2026-03-09T20:36:01.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:36:00 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:36:01.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:36:00 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:36:01.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:36:00 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:36:02.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:36:01 vm05 ceph-mon[61345]: pgmap v1140: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:36:02.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:36:01 vm05 ceph-mon[61345]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:36:02.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:36:01 vm05 ceph-mon[51870]: pgmap v1140: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:36:02.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:36:01 vm05 ceph-mon[51870]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:36:02.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:36:01 vm09 ceph-mon[54524]: pgmap v1140: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:36:02.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:36:01 vm09 ceph-mon[54524]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:36:04.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:36:03 vm05 ceph-mon[61345]: pgmap v1141: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:36:04.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 
20:36:03 vm05 ceph-mon[51870]: pgmap v1141: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:36:04.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:36:03 vm09 ceph-mon[54524]: pgmap v1141: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:36:06.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:36:05 vm05 ceph-mon[61345]: pgmap v1142: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 673 B/s rd, 0 op/s 2026-03-09T20:36:06.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:36:05 vm05 ceph-mon[51870]: pgmap v1142: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 673 B/s rd, 0 op/s 2026-03-09T20:36:06.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:36:05 vm09 ceph-mon[54524]: pgmap v1142: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 673 B/s rd, 0 op/s 2026-03-09T20:36:06.273 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:36:05 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:36:08.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:36:07 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:36:08.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:36:07 vm09 ceph-mon[54524]: pgmap v1143: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:36:08.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:36:07 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:36:08.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:36:07 vm05 ceph-mon[61345]: pgmap v1143: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:36:08.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:36:07 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:36:08.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:36:07 vm05 ceph-mon[51870]: pgmap v1143: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:36:08.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:36:08 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:36:08] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:36:10.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:36:09 vm09 ceph-mon[54524]: pgmap v1144: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:36:10.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:36:09 vm05 ceph-mon[61345]: pgmap v1144: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:36:10.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:36:09 vm05 ceph-mon[51870]: pgmap v1144: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:36:12.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:36:11 vm09 ceph-mon[54524]: pgmap v1145: 228 pgs: 228 
active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T20:36:12.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:36:11 vm05 ceph-mon[61345]: pgmap v1145: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T20:36:12.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:36:11 vm05 ceph-mon[51870]: pgmap v1145: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T20:36:13.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:36:13 vm05 ceph-mon[61345]: pgmap v1146: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:36:13.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:36:13 vm05 ceph-mon[51870]: pgmap v1146: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:36:13.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:36:13 vm09 ceph-mon[54524]: pgmap v1146: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:36:15.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:36:15 vm09 ceph-mon[54524]: pgmap v1147: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:36:15.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:36:15 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:36:15.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:36:15 vm05 ceph-mon[61345]: pgmap v1147: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:36:15.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:36:15 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:36:15.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:36:15 vm05 ceph-mon[51870]: pgmap v1147: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:36:15.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:36:15 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:36:16.273 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:36:15 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:36:17.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:36:17 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:36:17.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:36:17 vm05 ceph-mon[61345]: pgmap v1148: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:36:17.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:36:17 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:36:17.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:36:17 vm05 ceph-mon[51870]: pgmap v1148: 228 pgs: 228 active+clean; 455 KiB data, 1.0 
GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:36:18.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:36:17 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:36:18.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:36:17 vm09 ceph-mon[54524]: pgmap v1148: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:36:18.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:36:18 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:36:18] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:36:19.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:36:19 vm05 ceph-mon[61345]: pgmap v1149: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:36:19.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:36:19 vm05 ceph-mon[51870]: pgmap v1149: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:36:20.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:36:19 vm09 ceph-mon[54524]: pgmap v1149: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:36:21.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:36:21 vm05 ceph-mon[61345]: pgmap v1150: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:36:21.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:36:21 vm05 ceph-mon[51870]: pgmap v1150: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:36:22.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:36:21 vm09 ceph-mon[54524]: pgmap v1150: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:36:24.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:36:23 vm09 ceph-mon[54524]: pgmap v1151: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:36:24.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:36:23 vm05 ceph-mon[61345]: pgmap v1151: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:36:24.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:36:23 vm05 ceph-mon[51870]: pgmap v1151: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:36:25.990 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:36:25 vm09 ceph-mon[54524]: pgmap v1152: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:36:26.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:36:25 vm05 ceph-mon[61345]: pgmap v1152: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:36:26.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:36:25 vm05 ceph-mon[51870]: pgmap v1152: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:36:26.273 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:36:25 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:36:28.023 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:36:27 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:36:28.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:36:27 vm09 ceph-mon[54524]: pgmap v1153: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:36:28.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:36:27 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:36:28.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:36:27 vm05 ceph-mon[61345]: pgmap v1153: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:36:28.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:36:27 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:36:28.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:36:27 vm05 ceph-mon[51870]: pgmap v1153: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:36:28.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:36:28 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:36:28] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:36:30.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:36:29 vm09 ceph-mon[54524]: pgmap v1154: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:36:30.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:36:29 vm05 ceph-mon[61345]: pgmap v1154: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:36:30.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:36:29 vm05 ceph-mon[51870]: pgmap v1154: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:36:31.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:36:30 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:36:31.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:36:30 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:36:31.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:36:30 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:36:32.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:36:31 vm05 ceph-mon[61345]: pgmap v1155: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:36:32.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:36:31 vm05 ceph-mon[51870]: pgmap v1155: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:36:32.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:36:31 vm09 ceph-mon[54524]: pgmap v1155: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB 
avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:36:34.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:36:33 vm05 ceph-mon[61345]: pgmap v1156: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:36:34.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:36:33 vm05 ceph-mon[51870]: pgmap v1156: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:36:34.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:36:33 vm09 ceph-mon[54524]: pgmap v1156: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:36:36.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:36:35 vm05 ceph-mon[61345]: pgmap v1157: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:36:36.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:36:35 vm05 ceph-mon[51870]: pgmap v1157: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:36:36.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:36:35 vm09 ceph-mon[54524]: pgmap v1157: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:36:36.273 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:36:36 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:36:38.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:36:37 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:36:38.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:36:37 vm05 ceph-mon[61345]: pgmap v1158: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:36:38.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:36:37 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:36:38.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:36:37 vm05 ceph-mon[51870]: pgmap v1158: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:36:38.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:36:37 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:36:38.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:36:37 vm09 ceph-mon[54524]: pgmap v1158: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:36:38.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:36:38 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:36:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:36:40.075 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:36:39 vm09 ceph-mon[54524]: pgmap v1159: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:36:40.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:36:39 vm05 ceph-mon[61345]: pgmap v1159: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 
0 op/s 2026-03-09T20:36:40.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:36:39 vm05 ceph-mon[51870]: pgmap v1159: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:36:42.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:36:41 vm05 ceph-mon[61345]: pgmap v1160: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:36:42.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:36:41 vm05 ceph-mon[51870]: pgmap v1160: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:36:42.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:36:41 vm09 ceph-mon[54524]: pgmap v1160: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:36:44.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:36:43 vm05 ceph-mon[61345]: pgmap v1161: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:36:44.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:36:43 vm05 ceph-mon[51870]: pgmap v1161: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:36:44.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:36:43 vm09 ceph-mon[54524]: pgmap v1161: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:36:46.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:36:45 vm05 ceph-mon[61345]: pgmap v1162: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:36:46.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:36:45 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:36:46.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:36:45 vm05 ceph-mon[51870]: pgmap v1162: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:36:46.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:36:45 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:36:46.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:36:45 vm09 ceph-mon[54524]: pgmap v1162: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:36:46.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:36:45 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:36:46.273 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:36:46 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:36:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:36:47 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:36:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:36:47 vm05 ceph-mon[61345]: pgmap v1163: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:36:48.160 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:36:47 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:36:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:36:47 vm05 ceph-mon[51870]: pgmap v1163: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:36:48.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:36:47 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:36:48.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:36:47 vm09 ceph-mon[54524]: pgmap v1163: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:36:48.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:36:48 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:36:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:36:49.868 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:36:49 vm05 ceph-mon[51870]: pgmap v1164: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:36:50.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:36:49 vm05 ceph-mon[61345]: pgmap v1164: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:36:50.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:36:49 vm09 ceph-mon[54524]: pgmap v1164: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:36:52.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:36:51 vm05 ceph-mon[61345]: pgmap v1165: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:36:52.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:36:51 vm05 ceph-mon[51870]: pgmap v1165: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:36:52.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:36:51 vm09 ceph-mon[54524]: pgmap v1165: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:36:54.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:36:53 vm05 ceph-mon[61345]: pgmap v1166: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:36:54.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:36:53 vm05 ceph-mon[51870]: pgmap v1166: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:36:54.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:36:53 vm09 ceph-mon[54524]: pgmap v1166: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:36:56.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:36:55 vm09 ceph-mon[54524]: pgmap v1167: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:36:56.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:36:55 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:36:56.273 
INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:36:56 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:36:56.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:36:55 vm05 ceph-mon[61345]: pgmap v1167: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:36:56.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:36:55 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:36:56.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:36:55 vm05 ceph-mon[51870]: pgmap v1167: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:36:56.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:36:55 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:36:57.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:36:56 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:36:57.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:36:56 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:36:57.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:36:56 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:36:57.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:36:56 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:36:57.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:36:56 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:36:57.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:36:56 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:36:57.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:36:56 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:36:57.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:36:56 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:36:57.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:36:56 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:36:58.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:36:57 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:36:58.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:36:57 vm09 ceph-mon[54524]: pgmap v1168: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:36:58.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:36:57 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:36:58.410 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:36:57 vm05 ceph-mon[61345]: pgmap v1168: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:36:58.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:36:57 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:36:58.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:36:57 vm05 ceph-mon[51870]: pgmap v1168: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:36:58.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:36:58 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:36:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:37:00.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:37:00 vm09 ceph-mon[54524]: pgmap v1169: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:37:00.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:37:00 vm05 ceph-mon[61345]: pgmap v1169: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:37:00.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:36:59 vm05 ceph-mon[51870]: pgmap v1169: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:37:01.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:37:01 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:37:01.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:37:01 vm09 ceph-mon[54524]: pgmap v1170: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:37:01.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:37:01 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:37:01.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:37:01 vm05 ceph-mon[61345]: pgmap v1170: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:37:01.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:37:01 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:37:01.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:37:01 vm05 ceph-mon[51870]: pgmap v1170: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:37:03.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:37:03 vm09 ceph-mon[54524]: pgmap v1171: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:37:03.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:37:03 vm05 ceph-mon[61345]: pgmap v1171: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:37:03.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:37:03 vm05 ceph-mon[51870]: pgmap v1171: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:37:05.773 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:37:05 vm09 ceph-mon[54524]: pgmap v1172: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:37:05.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:37:05 vm05 ceph-mon[61345]: pgmap v1172: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:37:05.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:37:05 vm05 ceph-mon[51870]: pgmap v1172: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:37:06.523 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:37:06 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:37:07.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:37:07 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:37:07.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:37:07 vm05 ceph-mon[61345]: pgmap v1173: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:37:07.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:37:07 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:37:07.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:37:07 vm05 ceph-mon[51870]: pgmap v1173: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:37:08.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:37:07 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:37:08.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:37:07 vm09 ceph-mon[54524]: pgmap v1173: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:37:08.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:37:08 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:37:08] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:37:10.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:37:09 vm09 ceph-mon[54524]: pgmap v1174: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:37:10.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:37:09 vm05 ceph-mon[51870]: pgmap v1174: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:37:10.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:37:09 vm05 ceph-mon[61345]: pgmap v1174: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:37:12.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:37:11 vm05 ceph-mon[51870]: pgmap v1175: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:37:12.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:37:11 vm05 ceph-mon[61345]: pgmap v1175: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:37:12.273 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:37:11 vm09 ceph-mon[54524]: pgmap v1175: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:37:14.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:37:13 vm05 ceph-mon[51870]: pgmap v1176: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:37:14.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:37:13 vm05 ceph-mon[61345]: pgmap v1176: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:37:14.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:37:13 vm09 ceph-mon[54524]: pgmap v1176: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:37:16.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:37:15 vm05 ceph-mon[51870]: pgmap v1177: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:37:16.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:37:15 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:37:16.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:37:15 vm05 ceph-mon[61345]: pgmap v1177: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:37:16.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:37:15 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:37:16.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:37:15 vm09 ceph-mon[54524]: pgmap v1177: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:37:16.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:37:15 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:37:16.273 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:37:16 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:37:18.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:37:17 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:37:18.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:37:17 vm05 ceph-mon[51870]: pgmap v1178: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:37:18.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:37:17 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:37:18.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:37:17 vm05 ceph-mon[61345]: pgmap v1178: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:37:18.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:37:17 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:37:18.273 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:37:17 vm09 ceph-mon[54524]: pgmap v1178: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:37:18.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:37:18 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:37:18] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:37:20.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:37:19 vm09 ceph-mon[54524]: pgmap v1179: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T20:37:20.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:37:19 vm05 ceph-mon[51870]: pgmap v1179: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T20:37:20.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:37:19 vm05 ceph-mon[61345]: pgmap v1179: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T20:37:22.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:37:21 vm09 ceph-mon[54524]: pgmap v1180: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:37:22.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:37:21 vm05 ceph-mon[51870]: pgmap v1180: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:37:22.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:37:21 vm05 ceph-mon[61345]: pgmap v1180: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:37:24.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:37:23 vm09 ceph-mon[54524]: pgmap v1181: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T20:37:24.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:37:23 vm05 ceph-mon[51870]: pgmap v1181: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T20:37:24.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:37:23 vm05 ceph-mon[61345]: pgmap v1181: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T20:37:26.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:37:26 vm05 ceph-mon[51870]: pgmap v1182: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T20:37:26.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:37:26 vm05 ceph-mon[61345]: pgmap v1182: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T20:37:26.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:37:26 vm09 ceph-mon[54524]: pgmap v1182: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T20:37:26.523 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:37:26 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:37:27.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:37:27 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:37:27.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:37:27 vm05 ceph-mon[51870]: pgmap 
v1183: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:37:27.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:37:27 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:37:27.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:37:27 vm05 ceph-mon[61345]: pgmap v1183: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:37:27.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:37:27 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:37:27.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:37:27 vm09 ceph-mon[54524]: pgmap v1183: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:37:28.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:37:28 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:37:28] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:37:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:37:29 vm09 ceph-mon[54524]: pgmap v1184: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:37:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:37:29 vm05 ceph-mon[51870]: pgmap v1184: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:37:29.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:37:29 vm05 ceph-mon[61345]: pgmap v1184: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:37:30.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:37:30 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:37:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:37:30 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:37:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:37:30 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:37:31.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:37:31 vm09 ceph-mon[54524]: pgmap v1185: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:37:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:37:31 vm05 ceph-mon[51870]: pgmap v1185: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:37:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:37:31 vm05 ceph-mon[61345]: pgmap v1185: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:37:33.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:37:33 vm09 ceph-mon[54524]: pgmap v1186: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:37:33.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:37:33 vm05 
ceph-mon[51870]: pgmap v1186: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:37:33.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:37:33 vm05 ceph-mon[61345]: pgmap v1186: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:37:35.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:37:35 vm09 ceph-mon[54524]: pgmap v1187: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:37:35.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:37:35 vm05 ceph-mon[51870]: pgmap v1187: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:37:35.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:37:35 vm05 ceph-mon[61345]: pgmap v1187: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:37:36.523 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:37:36 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:37:37.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:37:37 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:37:37.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:37:37 vm09 ceph-mon[54524]: pgmap v1188: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:37:37.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:37:37 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:37:37.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:37:37 vm05 ceph-mon[51870]: pgmap v1188: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:37:37.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:37:37 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:37:37.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:37:37 vm05 ceph-mon[61345]: pgmap v1188: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:37:38.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:37:38 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:37:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:37:39.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:37:39 vm09 ceph-mon[54524]: pgmap v1189: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:37:39.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:37:39 vm05 ceph-mon[61345]: pgmap v1189: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:37:39.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:37:39 vm05 ceph-mon[51870]: pgmap v1189: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:37:41.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:37:41 vm05 ceph-mon[61345]: 
pgmap v1190: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:37:41.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:37:41 vm05 ceph-mon[51870]: pgmap v1190: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:37:42.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:37:41 vm09 ceph-mon[54524]: pgmap v1190: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:37:43.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:37:43 vm05 ceph-mon[61345]: pgmap v1191: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:37:43.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:37:43 vm05 ceph-mon[51870]: pgmap v1191: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:37:44.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:37:43 vm09 ceph-mon[54524]: pgmap v1191: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:37:45.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:37:45 vm05 ceph-mon[61345]: pgmap v1192: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:37:45.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:37:45 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:37:45.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:37:45 vm05 ceph-mon[51870]: pgmap v1192: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:37:45.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:37:45 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:37:46.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:37:45 vm09 ceph-mon[54524]: pgmap v1192: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:37:46.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:37:45 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:37:46.523 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:37:46 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:37:47.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:37:47 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:37:47.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:37:47 vm05 ceph-mon[61345]: pgmap v1193: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:37:47.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:37:47 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:37:47.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:37:47 vm05 ceph-mon[51870]: pgmap v1193: 228 pgs: 228 
active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:37:48.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:37:47 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:37:48.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:37:47 vm09 ceph-mon[54524]: pgmap v1193: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:37:48.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:37:48 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:37:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:37:49.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:37:49 vm05 ceph-mon[61345]: pgmap v1194: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:37:49.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:37:49 vm05 ceph-mon[51870]: pgmap v1194: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:37:50.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:37:49 vm09 ceph-mon[54524]: pgmap v1194: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:37:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:37:51 vm05 ceph-mon[61345]: pgmap v1195: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:37:51.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:37:51 vm05 ceph-mon[51870]: pgmap v1195: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:37:52.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:37:51 vm09 ceph-mon[54524]: pgmap v1195: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:37:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:37:53 vm05 ceph-mon[61345]: pgmap v1196: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:37:53.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:37:53 vm05 ceph-mon[51870]: pgmap v1196: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:37:54.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:37:53 vm09 ceph-mon[54524]: pgmap v1196: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:37:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:37:55 vm05 ceph-mon[61345]: pgmap v1197: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:37:55.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:37:55 vm05 ceph-mon[51870]: pgmap v1197: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:37:56.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:37:55 vm09 ceph-mon[54524]: pgmap v1197: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:37:56.523 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:37:56 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data 
available 2026-03-09T20:37:56.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:37:56 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:37:56.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:37:56 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:37:56.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:37:56 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:37:56.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:37:56 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:37:56.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:37:56 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:37:56.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:37:56 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:37:56.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:37:56 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:37:56.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:37:56 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:37:57.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:37:56 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:37:57.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:37:56 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:37:57.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:37:56 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:37:57.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:37:56 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:37:57.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:37:57 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:37:57.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:37:57 vm05 ceph-mon[61345]: pgmap v1198: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:37:57.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:37:57 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:37:57.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:37:57 vm05 ceph-mon[51870]: pgmap v1198: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:37:58.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:37:57 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' 
cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:37:58.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:37:57 vm09 ceph-mon[54524]: pgmap v1198: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:37:58.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:37:58 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:37:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:37:59.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:37:59 vm05 ceph-mon[61345]: pgmap v1199: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:37:59.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:37:59 vm05 ceph-mon[51870]: pgmap v1199: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:00.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:37:59 vm09 ceph-mon[54524]: pgmap v1199: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:01.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:00 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:38:01.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:00 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:38:01.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:00 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:38:02.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:01 vm09 ceph-mon[54524]: pgmap v1200: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:38:02.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:01 vm05 ceph-mon[61345]: pgmap v1200: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:38:02.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:01 vm05 ceph-mon[51870]: pgmap v1200: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:38:04.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:03 vm09 ceph-mon[54524]: pgmap v1201: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:04.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:03 vm05 ceph-mon[61345]: pgmap v1201: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:04.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:03 vm05 ceph-mon[51870]: pgmap v1201: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:06.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:05 vm09 ceph-mon[54524]: pgmap v1202: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:06.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:05 vm05 ceph-mon[61345]: pgmap v1202: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s 
rd, 0 op/s 2026-03-09T20:38:06.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:05 vm05 ceph-mon[51870]: pgmap v1202: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:06.523 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:38:06 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:38:08.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:07 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:38:08.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:07 vm09 ceph-mon[54524]: pgmap v1203: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:38:08.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:07 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:38:08.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:07 vm05 ceph-mon[61345]: pgmap v1203: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:38:08.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:07 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:38:08.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:07 vm05 ceph-mon[51870]: pgmap v1203: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:38:08.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:38:08 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:38:08] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:38:10.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:10 vm05 ceph-mon[61345]: pgmap v1204: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:10.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:09 vm05 ceph-mon[51870]: pgmap v1204: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:10.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:10 vm09 ceph-mon[54524]: pgmap v1204: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:12.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:11 vm09 ceph-mon[54524]: pgmap v1205: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:38:12.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:11 vm05 ceph-mon[61345]: pgmap v1205: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:38:12.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:11 vm05 ceph-mon[51870]: pgmap v1205: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:38:14.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:13 vm09 ceph-mon[54524]: pgmap v1206: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T20:38:14.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:13 vm05 ceph-mon[61345]: pgmap v1206: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:14.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:13 vm05 ceph-mon[51870]: pgmap v1206: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:15.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:15 vm05 ceph-mon[61345]: pgmap v1207: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:15.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:15 vm05 ceph-mon[51870]: pgmap v1207: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:15.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:15 vm09 ceph-mon[54524]: pgmap v1207: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:16.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:16 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:38:16.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:16 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:38:16.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:16 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:38:16.523 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:38:16 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:38:17.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:17 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:38:17.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:17 vm05 ceph-mon[61345]: pgmap v1208: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:38:17.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:17 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:38:17.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:17 vm05 ceph-mon[51870]: pgmap v1208: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:38:17.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:17 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:38:17.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:17 vm09 ceph-mon[54524]: pgmap v1208: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:38:18.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:38:18 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:38:18] "GET /metrics HTTP/1.1" 
503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:38:19.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:19 vm09 ceph-mon[54524]: pgmap v1209: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:19.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:19 vm05 ceph-mon[61345]: pgmap v1209: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:19.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:19 vm05 ceph-mon[51870]: pgmap v1209: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:21.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:21 vm09 ceph-mon[54524]: pgmap v1210: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:38:21.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:21 vm05 ceph-mon[61345]: pgmap v1210: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:38:21.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:21 vm05 ceph-mon[51870]: pgmap v1210: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:38:23.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:23 vm09 ceph-mon[54524]: pgmap v1211: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:23.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:23 vm05 ceph-mon[61345]: pgmap v1211: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:23.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:23 vm05 ceph-mon[51870]: pgmap v1211: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:25.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:25 vm09 ceph-mon[54524]: pgmap v1212: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:25.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:25 vm05 ceph-mon[61345]: pgmap v1212: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:25.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:25 vm05 ceph-mon[51870]: pgmap v1212: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:26.523 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:38:26 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:38:27.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:27 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:38:27.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:27 vm05 ceph-mon[61345]: pgmap v1213: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:38:27.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:27 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:38:27.910 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:27 vm05 ceph-mon[51870]: pgmap v1213: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:38:28.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:27 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:38:28.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:27 vm09 ceph-mon[54524]: pgmap v1213: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:38:28.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:38:28 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:38:28] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:38:30.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:29 vm09 ceph-mon[54524]: pgmap v1214: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:30.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:29 vm05 ceph-mon[61345]: pgmap v1214: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:30.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:29 vm05 ceph-mon[51870]: pgmap v1214: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:31.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:30 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:38:31.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:30 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:38:31.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:30 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:38:32.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:31 vm05 ceph-mon[61345]: pgmap v1215: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:38:32.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:31 vm05 ceph-mon[51870]: pgmap v1215: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:38:32.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:31 vm09 ceph-mon[54524]: pgmap v1215: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:38:34.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:33 vm05 ceph-mon[61345]: pgmap v1216: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:34.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:33 vm05 ceph-mon[51870]: pgmap v1216: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:34.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:33 vm09 ceph-mon[54524]: pgmap v1216: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:36.410 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:36 vm05 ceph-mon[61345]: pgmap v1217: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:36.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:36 vm05 ceph-mon[51870]: pgmap v1217: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:36.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:36 vm09 ceph-mon[54524]: pgmap v1217: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:36.523 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:38:36 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:38:37.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:37 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:38:37.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:37 vm09 ceph-mon[54524]: pgmap v1218: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:38:37.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:37 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:38:37.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:37 vm05 ceph-mon[61345]: pgmap v1218: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:38:37.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:37 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:38:37.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:37 vm05 ceph-mon[51870]: pgmap v1218: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:38:38.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:38:38 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:38:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:38:40.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:39 vm09 ceph-mon[54524]: pgmap v1219: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:40.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:39 vm05 ceph-mon[61345]: pgmap v1219: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:40.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:39 vm05 ceph-mon[51870]: pgmap v1219: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:42.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:41 vm09 ceph-mon[54524]: pgmap v1220: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:38:42.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:41 vm05 ceph-mon[61345]: pgmap v1220: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:38:42.160 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:41 vm05 ceph-mon[51870]: pgmap v1220: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:38:44.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:43 vm09 ceph-mon[54524]: pgmap v1221: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:44.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:43 vm05 ceph-mon[61345]: pgmap v1221: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:44.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:43 vm05 ceph-mon[51870]: pgmap v1221: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:46.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:45 vm09 ceph-mon[54524]: pgmap v1222: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:46.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:45 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:38:46.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:45 vm05 ceph-mon[61345]: pgmap v1222: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:46.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:45 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:38:46.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:45 vm05 ceph-mon[51870]: pgmap v1222: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:46.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:45 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:38:46.523 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:38:46 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:38:48.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:47 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:38:48.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:47 vm09 ceph-mon[54524]: pgmap v1223: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:38:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:47 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:38:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:47 vm05 ceph-mon[61345]: pgmap v1223: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:38:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:47 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:38:48.160 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:47 vm05 ceph-mon[51870]: pgmap v1223: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:38:48.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:38:48 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:38:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:38:50.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:49 vm09 ceph-mon[54524]: pgmap v1224: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:50.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:49 vm05 ceph-mon[61345]: pgmap v1224: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:50.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:49 vm05 ceph-mon[51870]: pgmap v1224: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:52.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:51 vm09 ceph-mon[54524]: pgmap v1225: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:38:52.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:51 vm05 ceph-mon[61345]: pgmap v1225: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:38:52.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:51 vm05 ceph-mon[51870]: pgmap v1225: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:38:54.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:53 vm09 ceph-mon[54524]: pgmap v1226: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:54.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:53 vm05 ceph-mon[61345]: pgmap v1226: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:54.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:53 vm05 ceph-mon[51870]: pgmap v1226: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:56.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:55 vm09 ceph-mon[54524]: pgmap v1227: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:56.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:55 vm05 ceph-mon[61345]: pgmap v1227: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:56.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:55 vm05 ceph-mon[51870]: pgmap v1227: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:38:56.523 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:38:56 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:38:57.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:56 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:38:57.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:56 vm05 ceph-mon[61345]: from='mgr.24602 ' 
entity='mgr.y' 2026-03-09T20:38:57.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:56 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:38:57.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:56 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:38:57.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:56 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:38:57.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:56 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:38:57.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:56 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:38:57.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:56 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:38:57.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:56 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:38:57.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:56 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:38:57.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:56 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:38:57.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:56 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:38:57.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:56 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:38:57.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:56 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:38:57.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:56 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:38:58.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:57 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:38:58.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:57 vm05 ceph-mon[61345]: pgmap v1228: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:38:58.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:57 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:38:58.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:57 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:38:58.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:57 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:38:58.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:57 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:38:58.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:57 vm05 ceph-mon[51870]: pgmap v1228: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:38:58.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:57 vm05 ceph-mon[51870]: 
from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:38:58.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:57 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:38:58.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:57 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:38:58.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:57 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:38:58.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:57 vm09 ceph-mon[54524]: pgmap v1228: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:38:58.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:57 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:38:58.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:57 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:38:58.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:57 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:38:58.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:38:58 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:38:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:39:00.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:38:59 vm05 ceph-mon[61345]: pgmap v1229: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:38:59 vm05 ceph-mon[51870]: pgmap v1229: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:00.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:38:59 vm09 ceph-mon[54524]: pgmap v1229: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:01.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:39:00 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:39:01.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:39:00 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:39:01.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:39:00 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:39:02.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:39:01 vm09 ceph-mon[54524]: pgmap v1230: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:39:02.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:39:01 vm05 ceph-mon[61345]: pgmap v1230: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-09T20:39:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:39:01 vm05 ceph-mon[51870]: pgmap v1230: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:39:04.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:39:03 vm09 ceph-mon[54524]: pgmap v1231: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:39:03 vm05 ceph-mon[61345]: pgmap v1231: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:39:03 vm05 ceph-mon[51870]: pgmap v1231: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:06.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:39:05 vm09 ceph-mon[54524]: pgmap v1232: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:06.273 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:39:06 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:39:06.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:39:05 vm05 ceph-mon[61345]: pgmap v1232: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:39:05 vm05 ceph-mon[51870]: pgmap v1232: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:08.212 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:39:07 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:39:08.212 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:39:07 vm05 ceph-mon[61345]: pgmap v1233: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:39:08.212 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:39:07 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:39:08.212 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:39:07 vm05 ceph-mon[51870]: pgmap v1233: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:39:08.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:39:07 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:39:08.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:39:07 vm09 ceph-mon[54524]: pgmap v1233: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:39:08.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:39:08 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:39:08] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:39:10.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:39:09 vm09 ceph-mon[54524]: pgmap v1234: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T20:39:10.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:39:09 vm05 ceph-mon[61345]: pgmap v1234: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:10.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:39:09 vm05 ceph-mon[51870]: pgmap v1234: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:12.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:39:11 vm09 ceph-mon[54524]: pgmap v1235: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:39:12.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:39:11 vm05 ceph-mon[61345]: pgmap v1235: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:39:12.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:39:11 vm05 ceph-mon[51870]: pgmap v1235: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:39:14.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:39:14 vm05 ceph-mon[61345]: pgmap v1236: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:14.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:39:14 vm05 ceph-mon[51870]: pgmap v1236: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:14.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:39:14 vm09 ceph-mon[54524]: pgmap v1236: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:15.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:39:15 vm05 ceph-mon[61345]: pgmap v1237: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:15.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:39:15 vm05 ceph-mon[51870]: pgmap v1237: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:15.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:39:15 vm09 ceph-mon[54524]: pgmap v1237: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:16.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:39:16 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:39:16.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:39:16 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:39:16.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:39:16 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:39:16.523 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:39:16 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:39:17.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:39:17 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:39:17.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 
09 20:39:17 vm05 ceph-mon[61345]: pgmap v1238: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:39:17.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:39:17 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:39:17.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:39:17 vm05 ceph-mon[51870]: pgmap v1238: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:39:17.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:39:17 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:39:17.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:39:17 vm09 ceph-mon[54524]: pgmap v1238: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:39:18.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:39:18 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:39:18] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:39:19.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:39:19 vm05 ceph-mon[61345]: pgmap v1239: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:19.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:39:19 vm05 ceph-mon[51870]: pgmap v1239: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:20.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:39:19 vm09 ceph-mon[54524]: pgmap v1239: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:21.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:39:21 vm05 ceph-mon[61345]: pgmap v1240: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:39:21.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:39:21 vm05 ceph-mon[51870]: pgmap v1240: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:39:22.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:39:21 vm09 ceph-mon[54524]: pgmap v1240: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:39:23.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:39:23 vm05 ceph-mon[61345]: pgmap v1241: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:23.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:39:23 vm05 ceph-mon[51870]: pgmap v1241: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:24.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:39:23 vm09 ceph-mon[54524]: pgmap v1241: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:25.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:39:25 vm05 ceph-mon[61345]: pgmap v1242: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:25.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:39:25 vm05 ceph-mon[51870]: 
pgmap v1242: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:26.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:39:25 vm09 ceph-mon[54524]: pgmap v1242: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:26.523 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:39:26 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:39:27.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:39:27 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:39:27.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:39:27 vm05 ceph-mon[61345]: pgmap v1243: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:39:27.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:39:27 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:39:27.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:39:27 vm05 ceph-mon[51870]: pgmap v1243: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:39:28.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:39:27 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:39:28.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:39:27 vm09 ceph-mon[54524]: pgmap v1243: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:39:28.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:39:28 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:39:28] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:39:29.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:39:29 vm05 ceph-mon[61345]: pgmap v1244: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:39:29 vm05 ceph-mon[51870]: pgmap v1244: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:30.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:39:29 vm09 ceph-mon[54524]: pgmap v1244: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:39:30 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:39:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:39:30 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:39:31.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:39:30 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:39:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 
20:39:31 vm05 ceph-mon[61345]: pgmap v1245: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:39:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:39:31 vm05 ceph-mon[51870]: pgmap v1245: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:39:32.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:39:31 vm09 ceph-mon[54524]: pgmap v1245: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:39:33.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:39:33 vm05 ceph-mon[61345]: pgmap v1246: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:33.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:39:33 vm05 ceph-mon[51870]: pgmap v1246: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:34.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:39:33 vm09 ceph-mon[54524]: pgmap v1246: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:36.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:39:35 vm09 ceph-mon[54524]: pgmap v1247: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:36.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:39:35 vm05 ceph-mon[61345]: pgmap v1247: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:36.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:39:35 vm05 ceph-mon[51870]: pgmap v1247: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:36.523 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:39:36 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:39:38.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:39:37 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:39:38.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:39:37 vm09 ceph-mon[54524]: pgmap v1248: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:39:38.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:39:37 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:39:38.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:39:37 vm05 ceph-mon[61345]: pgmap v1248: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:39:38.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:39:37 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:39:38.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:39:37 vm05 ceph-mon[51870]: pgmap v1248: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:39:38.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:39:38 vm05 
ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:39:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:39:40.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:39:39 vm09 ceph-mon[54524]: pgmap v1249: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:40.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:39:39 vm05 ceph-mon[61345]: pgmap v1249: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:40.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:39:39 vm05 ceph-mon[51870]: pgmap v1249: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:42.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:39:41 vm09 ceph-mon[54524]: pgmap v1250: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:39:42.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:39:41 vm05 ceph-mon[61345]: pgmap v1250: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:39:42.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:39:41 vm05 ceph-mon[51870]: pgmap v1250: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:39:44.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:39:43 vm09 ceph-mon[54524]: pgmap v1251: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:44.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:39:43 vm05 ceph-mon[61345]: pgmap v1251: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:44.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:39:43 vm05 ceph-mon[51870]: pgmap v1251: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:46.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:39:45 vm09 ceph-mon[54524]: pgmap v1252: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:46.024 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:39:45 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:39:46.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:39:45 vm05 ceph-mon[61345]: pgmap v1252: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:46.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:39:45 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:39:46.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:39:45 vm05 ceph-mon[51870]: pgmap v1252: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:46.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:39:45 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:39:46.523 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:39:46 vm09 
ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:39:48.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:39:47 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:39:48.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:39:47 vm09 ceph-mon[54524]: pgmap v1253: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:39:48.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:39:47 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:39:48.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:39:47 vm05 ceph-mon[61345]: pgmap v1253: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:39:48.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:39:47 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:39:48.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:39:47 vm05 ceph-mon[51870]: pgmap v1253: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:39:48.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:39:48 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:39:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:39:50.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:39:49 vm09 ceph-mon[54524]: pgmap v1254: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:50.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:39:49 vm05 ceph-mon[61345]: pgmap v1254: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:50.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:39:49 vm05 ceph-mon[51870]: pgmap v1254: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:52.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:39:51 vm09 ceph-mon[54524]: pgmap v1255: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:39:52.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:39:51 vm05 ceph-mon[61345]: pgmap v1255: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:39:52.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:39:51 vm05 ceph-mon[51870]: pgmap v1255: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:39:54.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:39:53 vm09 ceph-mon[54524]: pgmap v1256: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:54.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:39:53 vm05 ceph-mon[61345]: pgmap v1256: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:54.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:39:53 vm05 ceph-mon[51870]: pgmap 
v1256: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:56.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:39:55 vm09 ceph-mon[54524]: pgmap v1257: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:56.273 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:39:56 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:39:56.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:39:55 vm05 ceph-mon[61345]: pgmap v1257: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:56.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:39:55 vm05 ceph-mon[51870]: pgmap v1257: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:57.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:39:57 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:39:57.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:39:57 vm09 ceph-mon[54524]: pgmap v1258: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:39:57.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:39:57 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:39:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:39:57 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:39:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:39:57 vm05 ceph-mon[61345]: pgmap v1258: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:39:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:39:57 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:39:57.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:39:57 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:39:57.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:39:57 vm05 ceph-mon[51870]: pgmap v1258: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:39:57.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:39:57 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:39:58.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:39:58 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:39:58.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:39:58 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:39:58.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:39:58 vm09 
ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:39:58.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:39:58 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:39:58.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:39:58 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:39:58.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:39:58 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:39:58.634 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:39:58 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:39:58.634 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:39:58 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:39:58.634 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:39:58 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:39:58.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:39:58 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:39:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:39:59.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:39:59 vm09 ceph-mon[54524]: pgmap v1259: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:59.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:39:59 vm05 ceph-mon[61345]: pgmap v1259: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:39:59.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:39:59 vm05 ceph-mon[51870]: pgmap v1259: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:40:00.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:00 vm09 ceph-mon[54524]: Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled 2026-03-09T20:40:00.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:00 vm09 ceph-mon[54524]: [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled 2026-03-09T20:40:00.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:00 vm09 ceph-mon[54524]: application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T20:40:00.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:00 vm09 ceph-mon[54524]: application not enabled on pool 'WatchNotifyvm05-95715-1' 2026-03-09T20:40:00.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:00 vm09 ceph-mon[54524]: application not enabled on pool 'AssertExistsvm05-95743-1' 2026-03-09T20:40:00.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:00 vm09 ceph-mon[54524]: use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 
2026-03-09T20:40:00.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:00 vm05 ceph-mon[61345]: Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled 2026-03-09T20:40:00.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:00 vm05 ceph-mon[61345]: [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled 2026-03-09T20:40:00.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:00 vm05 ceph-mon[61345]: application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T20:40:00.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:00 vm05 ceph-mon[61345]: application not enabled on pool 'WatchNotifyvm05-95715-1' 2026-03-09T20:40:00.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:00 vm05 ceph-mon[61345]: application not enabled on pool 'AssertExistsvm05-95743-1' 2026-03-09T20:40:00.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:00 vm05 ceph-mon[61345]: use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-09T20:40:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:00 vm05 ceph-mon[51870]: Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled 2026-03-09T20:40:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:00 vm05 ceph-mon[51870]: [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled 2026-03-09T20:40:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:00 vm05 ceph-mon[51870]: application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-09T20:40:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:00 vm05 ceph-mon[51870]: application not enabled on pool 'WatchNotifyvm05-95715-1' 2026-03-09T20:40:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:00 vm05 ceph-mon[51870]: application not enabled on pool 'AssertExistsvm05-95743-1' 2026-03-09T20:40:00.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:00 vm05 ceph-mon[51870]: use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 
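For reference, the remediation the monitors are hinting at above is a single ceph CLI command; the pool and application names below are only illustrative, taken from the warning itself:

    ceph osd pool application enable ceph_test_rados_api_asio rbd

Here the offending pools are transient ones created by the rados API tests, so the warning is expected to clear once those pools are removed rather than requiring manual intervention.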
2026-03-09T20:40:01.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:01 vm09 ceph-mon[54524]: pgmap v1260: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:40:01.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:01 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:40:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:01 vm05 ceph-mon[61345]: pgmap v1260: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:40:01.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:01 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:40:01.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:01 vm05 ceph-mon[51870]: pgmap v1260: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:40:01.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:01 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:40:03.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:03 vm09 ceph-mon[54524]: pgmap v1261: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:40:03.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:03 vm05 ceph-mon[61345]: pgmap v1261: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:40:03.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:03 vm05 ceph-mon[51870]: pgmap v1261: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:40:05.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:05 vm09 ceph-mon[54524]: pgmap v1262: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:40:05.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:05 vm05 ceph-mon[61345]: pgmap v1262: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:40:05.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:05 vm05 ceph-mon[51870]: pgmap v1262: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:40:06.523 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:40:06 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:40:07.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:07 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:40:07.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:07 vm05 ceph-mon[61345]: pgmap v1263: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:40:07.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:07 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:40:07.910 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:07 vm05 ceph-mon[51870]: pgmap v1263: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:40:08.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:07 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:40:08.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:07 vm09 ceph-mon[54524]: pgmap v1263: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:40:08.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:40:08 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:40:08] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:40:09.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:09 vm05 ceph-mon[61345]: pgmap v1264: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:40:09.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:09 vm05 ceph-mon[51870]: pgmap v1264: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:40:10.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:09 vm09 ceph-mon[54524]: pgmap v1264: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:40:11.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:11 vm05 ceph-mon[61345]: pgmap v1265: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:40:11.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:11 vm05 ceph-mon[51870]: pgmap v1265: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:40:12.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:11 vm09 ceph-mon[54524]: pgmap v1265: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:40:13.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:13 vm05 ceph-mon[61345]: pgmap v1266: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:40:13.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:13 vm05 ceph-mon[51870]: pgmap v1266: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:40:14.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:13 vm09 ceph-mon[54524]: pgmap v1266: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:40:15.825 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:15 vm09 ceph-mon[54524]: pgmap v1267: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:40:15.825 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:15 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:40:15.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:15 vm05 ceph-mon[61345]: pgmap v1267: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:40:15.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 
20:40:15 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:40:15.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:15 vm05 ceph-mon[51870]: pgmap v1267: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:40:15.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:15 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:40:16.523 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:40:16 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:40:17.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:17 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:40:17.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:17 vm05 ceph-mon[61345]: pgmap v1268: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:40:17.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:17 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:40:17.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:17 vm05 ceph-mon[51870]: pgmap v1268: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:40:18.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:17 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:40:18.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:17 vm09 ceph-mon[54524]: pgmap v1268: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:40:18.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:40:18 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:40:18] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:40:19.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:19 vm05 ceph-mon[61345]: pgmap v1269: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:40:19.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:19 vm05 ceph-mon[51870]: pgmap v1269: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:40:20.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:19 vm09 ceph-mon[54524]: pgmap v1269: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:40:21.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:21 vm05 ceph-mon[61345]: pgmap v1270: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:40:21.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:21 vm05 ceph-mon[51870]: pgmap v1270: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:40:22.022 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:21 vm09 ceph-mon[54524]: pgmap v1270: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:40:23.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:23 vm05 ceph-mon[61345]: pgmap v1271: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:40:23.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:23 vm05 ceph-mon[51870]: pgmap v1271: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:40:24.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:23 vm09 ceph-mon[54524]: pgmap v1271: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:40:25.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:25 vm05 ceph-mon[61345]: pgmap v1272: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:40:25.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:25 vm05 ceph-mon[51870]: pgmap v1272: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:40:26.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:25 vm09 ceph-mon[54524]: pgmap v1272: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:40:26.523 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:40:26 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:40:28.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:27 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:40:28.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:27 vm09 ceph-mon[54524]: pgmap v1273: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:40:28.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:27 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:40:28.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:27 vm05 ceph-mon[61345]: pgmap v1273: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:40:28.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:27 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:40:28.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:27 vm05 ceph-mon[51870]: pgmap v1273: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:40:28.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:40:28 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:40:28] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:40:30.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:29 vm09 ceph-mon[54524]: pgmap v1274: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:40:30.160 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:29 vm05 ceph-mon[51870]: pgmap v1274: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:40:30.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:29 vm05 ceph-mon[61345]: pgmap v1274: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:40:31.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:30 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:40:31.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:30 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:40:31.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:30 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:40:32.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:31 vm09 ceph-mon[54524]: pgmap v1275: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:40:32.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:31 vm05 ceph-mon[61345]: pgmap v1275: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:40:32.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:31 vm05 ceph-mon[51870]: pgmap v1275: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:40:34.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:33 vm09 ceph-mon[54524]: pgmap v1276: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:40:34.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:33 vm05 ceph-mon[61345]: pgmap v1276: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:40:34.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:33 vm05 ceph-mon[51870]: pgmap v1276: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:40:36.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:35 vm09 ceph-mon[54524]: pgmap v1277: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:40:36.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:35 vm05 ceph-mon[61345]: pgmap v1277: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:40:36.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:35 vm05 ceph-mon[51870]: pgmap v1277: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:40:36.523 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:40:36 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:40:38.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:37 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:40:38.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:37 vm09 
ceph-mon[54524]: pgmap v1278: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T20:40:38.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:37 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:40:38.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:37 vm05 ceph-mon[61345]: pgmap v1278: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T20:40:38.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:37 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:40:38.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:37 vm05 ceph-mon[51870]: pgmap v1278: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T20:40:38.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:40:38 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:40:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:40:40.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:39 vm09 ceph-mon[54524]: pgmap v1279: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T20:40:40.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:39 vm05 ceph-mon[61345]: pgmap v1279: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T20:40:40.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:39 vm05 ceph-mon[51870]: pgmap v1279: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T20:40:42.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:41 vm09 ceph-mon[54524]: pgmap v1280: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T20:40:42.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:41 vm05 ceph-mon[61345]: pgmap v1280: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T20:40:42.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:41 vm05 ceph-mon[51870]: pgmap v1280: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T20:40:44.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:43 vm05 ceph-mon[61345]: pgmap v1281: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T20:40:44.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:43 vm05 ceph-mon[51870]: pgmap v1281: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T20:40:44.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:43 vm09 ceph-mon[54524]: pgmap v1281: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T20:40:46.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:45 vm05 ceph-mon[61345]: pgmap v1282: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 
KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T20:40:46.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:45 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:40:46.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:45 vm05 ceph-mon[51870]: pgmap v1282: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T20:40:46.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:45 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:40:46.219 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:45 vm09 ceph-mon[54524]: pgmap v1282: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T20:40:46.219 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:45 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:40:46.523 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:40:46 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:40:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:47 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:40:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:47 vm05 ceph-mon[61345]: pgmap v1283: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T20:40:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:47 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:40:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:47 vm05 ceph-mon[51870]: pgmap v1283: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T20:40:48.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:47 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:40:48.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:47 vm09 ceph-mon[54524]: pgmap v1283: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T20:40:48.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:40:48 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:40:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:40:50.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:49 vm05 ceph-mon[61345]: pgmap v1284: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:40:50.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:49 vm05 ceph-mon[51870]: pgmap v1284: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:40:50.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:49 vm09 ceph-mon[54524]: 
pgmap v1284: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:40:52.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:51 vm05 ceph-mon[61345]: pgmap v1285: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:40:52.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:51 vm05 ceph-mon[51870]: pgmap v1285: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:40:52.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:51 vm09 ceph-mon[54524]: pgmap v1285: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:40:54.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:53 vm09 ceph-mon[54524]: pgmap v1286: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:40:54.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:53 vm05 ceph-mon[61345]: pgmap v1286: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:40:54.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:53 vm05 ceph-mon[51870]: pgmap v1286: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:40:56.230 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:55 vm09 ceph-mon[54524]: pgmap v1287: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:40:56.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:55 vm05 ceph-mon[61345]: pgmap v1287: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:40:56.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:55 vm05 ceph-mon[51870]: pgmap v1287: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:40:56.523 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:40:56 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:40:58.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:57 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:40:58.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:57 vm09 ceph-mon[54524]: pgmap v1288: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:40:58.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:57 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:40:58.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:57 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:40:58.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:57 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:40:58.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:57 vm09 ceph-mon[54524]: from='mgr.24602 
v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:40:58.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:57 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:40:58.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:57 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:40:58.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:57 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:40:58.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:57 vm09 ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:40:58.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:57 vm05 ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:40:58.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:57 vm05 ceph-mon[61345]: pgmap v1288: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:40:58.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:57 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:40:58.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:57 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:40:58.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:57 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:40:58.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:57 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:40:58.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:57 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:40:58.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:57 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:40:58.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:57 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:40:58.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:57 vm05 ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:40:58.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:57 vm05 ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:40:58.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:57 vm05 ceph-mon[51870]: pgmap 
v1288: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:40:58.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:57 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:40:58.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:57 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:40:58.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:57 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:40:58.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:57 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:40:58.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:57 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:40:58.411 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:57 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:40:58.411 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:57 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:40:58.411 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:57 vm05 ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:40:58.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:40:58 vm05 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:40:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:40:59.273 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:40:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=cleanup t=2026-03-09T20:40:58.785972432Z level=info msg="Completed cleanup jobs" duration=1.429406ms 2026-03-09T20:40:59.273 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:40:58 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=plugins.update.checker t=2026-03-09T20:40:58.944280082Z level=info msg="Update check succeeded" duration=50.344818ms 2026-03-09T20:41:00.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:40:59 vm09 ceph-mon[54524]: pgmap v1289: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:00.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:40:59 vm05 ceph-mon[61345]: pgmap v1289: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:00.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:40:59 vm05 ceph-mon[51870]: pgmap v1289: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:01.230 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:41:00 vm05 ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 
2026-03-09T20:41:01.230 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:41:00 vm05 ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:41:01.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:41:00 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:41:02.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:41:01 vm09 ceph-mon[54524]: pgmap v1290: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:41:02.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:41:01 vm05.local ceph-mon[61345]: pgmap v1290: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:41:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:41:01 vm05.local ceph-mon[51870]: pgmap v1290: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:41:04.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:41:03 vm09 ceph-mon[54524]: pgmap v1291: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:41:03 vm05.local ceph-mon[61345]: pgmap v1291: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:41:03 vm05.local ceph-mon[51870]: pgmap v1291: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:06.240 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:41:05 vm09 ceph-mon[54524]: pgmap v1292: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:06.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:41:05 vm05.local ceph-mon[61345]: pgmap v1292: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:41:05 vm05.local ceph-mon[51870]: pgmap v1292: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:06.523 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:41:06 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:41:08.212 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:41:07 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:41:08.212 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:41:07 vm05.local ceph-mon[61345]: pgmap v1293: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:41:08.212 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:41:07 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:41:08.212 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:41:07 vm05.local ceph-mon[51870]: pgmap v1293: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s 
rd, 1 op/s 2026-03-09T20:41:08.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:41:07 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:41:08.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:41:07 vm09 ceph-mon[54524]: pgmap v1293: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:41:08.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:41:08 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:41:08] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:41:10.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:41:09 vm09 ceph-mon[54524]: pgmap v1294: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:10.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:41:09 vm05.local ceph-mon[61345]: pgmap v1294: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:10.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:41:09 vm05.local ceph-mon[51870]: pgmap v1294: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:12.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:41:11 vm09 ceph-mon[54524]: pgmap v1295: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:41:12.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:41:11 vm05.local ceph-mon[61345]: pgmap v1295: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:41:12.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:41:11 vm05.local ceph-mon[51870]: pgmap v1295: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:41:14.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:41:13 vm09 ceph-mon[54524]: pgmap v1296: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:14.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:41:13 vm05.local ceph-mon[61345]: pgmap v1296: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:14.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:41:13 vm05.local ceph-mon[51870]: pgmap v1296: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:16.251 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:41:15 vm09 ceph-mon[54524]: pgmap v1297: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:16.251 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:41:15 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:41:16.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:41:15 vm05.local ceph-mon[61345]: pgmap v1297: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:16.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:41:15 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", 
"format": "json"}]: dispatch 2026-03-09T20:41:16.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:41:15 vm05.local ceph-mon[51870]: pgmap v1297: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:16.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:41:15 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:41:16.523 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:41:16 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:41:18.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:41:18 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:41:18.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:41:18 vm05.local ceph-mon[61345]: pgmap v1298: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:41:18.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:41:18 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:41:18.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:41:18 vm05.local ceph-mon[51870]: pgmap v1298: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:41:18.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:41:18 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:41:18.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:41:18 vm09 ceph-mon[54524]: pgmap v1298: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:41:18.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:41:18 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:41:18] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:41:19.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:41:19 vm05.local ceph-mon[61345]: pgmap v1299: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:19.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:41:19 vm05.local ceph-mon[51870]: pgmap v1299: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:19.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:41:19 vm09 ceph-mon[54524]: pgmap v1299: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:21.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:41:21 vm05.local ceph-mon[61345]: pgmap v1300: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:41:21.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:41:21 vm05.local ceph-mon[51870]: pgmap v1300: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:41:22.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:41:21 vm09 ceph-mon[54524]: pgmap v1300: 228 
pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:41:23.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:41:23 vm05.local ceph-mon[61345]: pgmap v1301: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:23.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:41:23 vm05.local ceph-mon[51870]: pgmap v1301: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:24.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:41:23 vm09 ceph-mon[54524]: pgmap v1301: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:25.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:41:25 vm05.local ceph-mon[61345]: pgmap v1302: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:25.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:41:25 vm05.local ceph-mon[51870]: pgmap v1302: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:26.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:41:25 vm09 ceph-mon[54524]: pgmap v1302: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:26.523 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:41:26 vm09 ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:41:27.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:41:27 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:41:27.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:41:27 vm05.local ceph-mon[61345]: pgmap v1303: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:41:27.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:41:27 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:41:27.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:41:27 vm05.local ceph-mon[51870]: pgmap v1303: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:41:28.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:41:27 vm09 ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:41:28.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:41:27 vm09 ceph-mon[54524]: pgmap v1303: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:41:28.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:41:28 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:41:28] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:41:29.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:41:29 vm05.local ceph-mon[61345]: pgmap v1304: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 
20:41:29 vm05.local ceph-mon[51870]: pgmap v1304: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:30.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:41:29 vm09 ceph-mon[54524]: pgmap v1304: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:41:30 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:41:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:41:30 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:41:31.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:41:30 vm09 ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:41:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:41:31 vm05.local ceph-mon[61345]: pgmap v1305: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:41:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:41:31 vm05.local ceph-mon[51870]: pgmap v1305: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:41:32.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:41:31 vm09 ceph-mon[54524]: pgmap v1305: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:41:33.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:41:33 vm05.local ceph-mon[61345]: pgmap v1306: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:33.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:41:33 vm05.local ceph-mon[51870]: pgmap v1306: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:34.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:41:33 vm09.local ceph-mon[54524]: pgmap v1306: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:36.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:41:35 vm09.local ceph-mon[54524]: pgmap v1307: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:36.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:41:35 vm05.local ceph-mon[61345]: pgmap v1307: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:36.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:41:35 vm05.local ceph-mon[51870]: pgmap v1307: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:36.523 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:41:36 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:41:38.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:41:37 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:41:38.023 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:41:37 vm09.local ceph-mon[54524]: pgmap v1308: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:41:38.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:41:37 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:41:38.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:41:37 vm05.local ceph-mon[61345]: pgmap v1308: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:41:38.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:41:37 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:41:38.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:41:37 vm05.local ceph-mon[51870]: pgmap v1308: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:41:38.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:41:38 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:41:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:41:40.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:41:39 vm09.local ceph-mon[54524]: pgmap v1309: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:40.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:41:39 vm05.local ceph-mon[61345]: pgmap v1309: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:40.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:41:39 vm05.local ceph-mon[51870]: pgmap v1309: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:42.022 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:41:41 vm09.local ceph-mon[54524]: pgmap v1310: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:41:42.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:41:41 vm05.local ceph-mon[61345]: pgmap v1310: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:41:42.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:41:41 vm05.local ceph-mon[51870]: pgmap v1310: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:41:44.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:41:43 vm09.local ceph-mon[54524]: pgmap v1311: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:44.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:41:43 vm05.local ceph-mon[61345]: pgmap v1311: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:44.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:41:43 vm05.local ceph-mon[51870]: pgmap v1311: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:46.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:41:45 vm09.local ceph-mon[54524]: pgmap v1312: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB 
/ 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:46.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:41:45 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:41:46.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:41:45 vm05.local ceph-mon[61345]: pgmap v1312: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:46.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:41:45 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:41:46.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:41:45 vm05.local ceph-mon[51870]: pgmap v1312: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:46.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:41:45 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:41:46.773 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:41:46 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:41:48.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:41:47 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:41:48.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:41:47 vm09.local ceph-mon[54524]: pgmap v1313: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:41:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:41:47 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:41:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:41:47 vm05.local ceph-mon[61345]: pgmap v1313: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:41:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:41:47 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:41:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:41:47 vm05.local ceph-mon[51870]: pgmap v1313: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:41:48.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:41:48 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:41:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:41:50.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:41:49 vm09.local ceph-mon[54524]: pgmap v1314: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:50.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:41:49 vm05.local ceph-mon[61345]: pgmap v1314: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:50.160 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:41:49 vm05.local ceph-mon[51870]: pgmap v1314: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:52.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:41:51 vm09.local ceph-mon[54524]: pgmap v1315: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:41:52.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:41:51 vm05.local ceph-mon[61345]: pgmap v1315: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:41:52.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:41:51 vm05.local ceph-mon[51870]: pgmap v1315: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:41:54.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:41:53 vm05.local ceph-mon[61345]: pgmap v1316: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:54.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:41:53 vm05.local ceph-mon[51870]: pgmap v1316: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:54.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:41:53 vm09.local ceph-mon[54524]: pgmap v1316: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:56.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:41:55 vm05.local ceph-mon[61345]: pgmap v1317: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:56.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:41:55 vm05.local ceph-mon[51870]: pgmap v1317: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:56.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:41:55 vm09.local ceph-mon[54524]: pgmap v1317: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:56.773 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:41:56 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:41:57.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:41:57 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:41:57.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:41:57 vm09.local ceph-mon[54524]: pgmap v1318: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:41:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:41:57 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:41:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:41:57 vm05.local ceph-mon[61345]: pgmap v1318: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:41:57.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:41:57 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", 
"format": "json"}]: dispatch 2026-03-09T20:41:57.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:41:57 vm05.local ceph-mon[51870]: pgmap v1318: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:41:58.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:41:58 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:41:58.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:41:58 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:41:58.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:41:58 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:41:58.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:41:58 vm09.local ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:41:58.635 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:41:58 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:41:58.635 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:41:58 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:41:58.635 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:41:58 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:41:58.635 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:41:58 vm05.local ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:41:58.635 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:41:58 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:41:58.635 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:41:58 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:41:58.635 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:41:58 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:41:58.635 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:41:58 vm05.local ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:41:58.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:41:58 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:41:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:41:59.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:41:59 vm09.local ceph-mon[54524]: pgmap v1319: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:59.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:41:59 vm05.local ceph-mon[61345]: pgmap v1319: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:41:59.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:41:59 vm05.local 
ceph-mon[51870]: pgmap v1319: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:00.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:42:00 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:42:00.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:00 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:42:01.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:00 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:42:01.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:42:01 vm05.local ceph-mon[61345]: pgmap v1320: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:42:01.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:01 vm05.local ceph-mon[51870]: pgmap v1320: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:42:02.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:01 vm09.local ceph-mon[54524]: pgmap v1320: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:42:03.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:42:03 vm05.local ceph-mon[61345]: pgmap v1321: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:03.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:03 vm05.local ceph-mon[51870]: pgmap v1321: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:04.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:03 vm09.local ceph-mon[54524]: pgmap v1321: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:05.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:42:05 vm05.local ceph-mon[61345]: pgmap v1322: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:05.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:05 vm05.local ceph-mon[51870]: pgmap v1322: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:06.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:05 vm09.local ceph-mon[54524]: pgmap v1322: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:06.773 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:42:06 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:42:07.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:42:07 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:42:07.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:42:07 vm05.local ceph-mon[61345]: pgmap v1323: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:42:07.910 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:07 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:42:07.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:07 vm05.local ceph-mon[51870]: pgmap v1323: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:42:08.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:07 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:42:08.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:07 vm09.local ceph-mon[54524]: pgmap v1323: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:42:08.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:42:08 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:42:08] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:42:09.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:42:09 vm05.local ceph-mon[61345]: pgmap v1324: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:09.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:09 vm05.local ceph-mon[51870]: pgmap v1324: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:10.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:09 vm09.local ceph-mon[54524]: pgmap v1324: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:12.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:11 vm09.local ceph-mon[54524]: pgmap v1325: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:42:12.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:42:11 vm05.local ceph-mon[61345]: pgmap v1325: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:42:12.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:11 vm05.local ceph-mon[51870]: pgmap v1325: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:42:14.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:13 vm09.local ceph-mon[54524]: pgmap v1326: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:14.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:42:13 vm05.local ceph-mon[61345]: pgmap v1326: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:14.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:13 vm05.local ceph-mon[51870]: pgmap v1326: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:16.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:42:15 vm05.local ceph-mon[61345]: pgmap v1327: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:16.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:42:15 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' 
cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:42:16.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:15 vm05.local ceph-mon[51870]: pgmap v1327: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:16.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:15 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:42:16.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:15 vm09.local ceph-mon[54524]: pgmap v1327: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:16.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:15 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:42:16.773 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:42:16 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:42:18.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:18 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:42:18.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:18 vm09.local ceph-mon[54524]: pgmap v1328: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:42:18.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:42:18 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:42:18.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:42:18 vm05.local ceph-mon[61345]: pgmap v1328: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:42:18.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:18 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:42:18.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:18 vm05.local ceph-mon[51870]: pgmap v1328: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:42:18.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:42:18 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:42:18] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:42:20.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:20 vm09.local ceph-mon[54524]: pgmap v1329: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:20.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:42:20 vm05.local ceph-mon[61345]: pgmap v1329: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:20.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:20 vm05.local ceph-mon[51870]: pgmap v1329: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:22.273 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:22 vm09.local ceph-mon[54524]: pgmap v1330: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:42:22.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:42:22 vm05.local ceph-mon[61345]: pgmap v1330: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:42:22.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:22 vm05.local ceph-mon[51870]: pgmap v1330: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:42:23.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:42:23 vm05.local ceph-mon[61345]: pgmap v1331: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:23.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:23 vm05.local ceph-mon[51870]: pgmap v1331: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:23.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:23 vm09.local ceph-mon[54524]: pgmap v1331: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:25.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:42:25 vm05.local ceph-mon[61345]: pgmap v1332: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:25.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:25 vm05.local ceph-mon[51870]: pgmap v1332: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:26.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:25 vm09.local ceph-mon[54524]: pgmap v1332: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:26.773 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:42:26 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:42:27.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:42:27 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:42:27.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:42:27 vm05.local ceph-mon[61345]: pgmap v1333: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:42:27.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:27 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:42:27.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:27 vm05.local ceph-mon[51870]: pgmap v1333: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:42:28.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:27 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:42:28.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:27 vm09.local ceph-mon[54524]: pgmap v1333: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB 
avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:42:28.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:42:28 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:42:28] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:42:29.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:42:29 vm05.local ceph-mon[61345]: pgmap v1334: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:29 vm05.local ceph-mon[51870]: pgmap v1334: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:30.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:29 vm09.local ceph-mon[54524]: pgmap v1334: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:42:30 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:42:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:30 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:42:31.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:30 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:42:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:42:31 vm05.local ceph-mon[61345]: pgmap v1335: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:42:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:31 vm05.local ceph-mon[51870]: pgmap v1335: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:42:32.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:31 vm09.local ceph-mon[54524]: pgmap v1335: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:42:33.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:42:33 vm05.local ceph-mon[61345]: pgmap v1336: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:33.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:33 vm05.local ceph-mon[51870]: pgmap v1336: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:34.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:33 vm09.local ceph-mon[54524]: pgmap v1336: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:35.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:42:35 vm05.local ceph-mon[61345]: pgmap v1337: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:35.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:35 vm05.local ceph-mon[51870]: pgmap v1337: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:36.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:35 vm09.local ceph-mon[54524]: pgmap v1337: 228 pgs: 228 active+clean; 455 
KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:36.773 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:42:36 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:42:37.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:42:37 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:42:37.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:42:37 vm05.local ceph-mon[61345]: pgmap v1338: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:42:37.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:37 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:42:37.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:37 vm05.local ceph-mon[51870]: pgmap v1338: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:42:38.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:37 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:42:38.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:37 vm09.local ceph-mon[54524]: pgmap v1338: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:42:38.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:42:38 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:42:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:42:39.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:42:39 vm05.local ceph-mon[61345]: pgmap v1339: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:39.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:39 vm05.local ceph-mon[51870]: pgmap v1339: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:40.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:39 vm09.local ceph-mon[54524]: pgmap v1339: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:41.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:42:41 vm05.local ceph-mon[61345]: pgmap v1340: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:42:41.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:41 vm05.local ceph-mon[51870]: pgmap v1340: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:42:42.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:41 vm09.local ceph-mon[54524]: pgmap v1340: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:42:44.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:43 vm09.local ceph-mon[54524]: pgmap v1341: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:44.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 
20:42:43 vm05.local ceph-mon[61345]: pgmap v1341: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:44.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:43 vm05.local ceph-mon[51870]: pgmap v1341: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:46.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:45 vm09.local ceph-mon[54524]: pgmap v1342: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:46.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:45 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:42:46.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:42:45 vm05.local ceph-mon[61345]: pgmap v1342: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:46.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:42:45 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:42:46.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:45 vm05.local ceph-mon[51870]: pgmap v1342: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:46.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:45 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:42:46.773 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:42:46 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:42:48.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:47 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:42:48.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:47 vm09.local ceph-mon[54524]: pgmap v1343: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:42:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:42:47 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:42:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:42:47 vm05.local ceph-mon[61345]: pgmap v1343: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:42:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:47 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:42:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:47 vm05.local ceph-mon[51870]: pgmap v1343: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:42:48.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:42:48 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:42:48] 
"GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:42:50.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:49 vm09.local ceph-mon[54524]: pgmap v1344: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:50.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:42:49 vm05.local ceph-mon[61345]: pgmap v1344: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:50.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:49 vm05.local ceph-mon[51870]: pgmap v1344: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:52.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:51 vm09.local ceph-mon[54524]: pgmap v1345: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:42:52.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:42:51 vm05.local ceph-mon[61345]: pgmap v1345: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:42:52.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:51 vm05.local ceph-mon[51870]: pgmap v1345: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:42:54.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:53 vm09.local ceph-mon[54524]: pgmap v1346: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:54.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:42:53 vm05.local ceph-mon[61345]: pgmap v1346: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:54.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:53 vm05.local ceph-mon[51870]: pgmap v1346: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:56.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:55 vm09.local ceph-mon[54524]: pgmap v1347: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:56.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:42:55 vm05.local ceph-mon[61345]: pgmap v1347: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:56.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:55 vm05.local ceph-mon[51870]: pgmap v1347: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:42:56.773 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:42:56 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:42:58.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:57 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:42:58.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:57 vm09.local ceph-mon[54524]: pgmap v1348: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:42:58.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:57 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' 
entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:42:58.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:57 vm05.local ceph-mon[51870]: pgmap v1348: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:42:58.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:42:57 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:42:58.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:42:57 vm05.local ceph-mon[61345]: pgmap v1348: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:42:58.635 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:42:58 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:42:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:42:58.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:58 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:42:58.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:42:58 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:42:59.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:58 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:43:00.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:59 vm09.local ceph-mon[54524]: pgmap v1349: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:00.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:59 vm09.local ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:43:00.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:59 vm09.local ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:43:00.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:59 vm09.local ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:43:00.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:59 vm09.local ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:43:00.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:59 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:43:00.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:59 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:43:00.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:42:59 vm09.local ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:43:00.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:42:59 vm05.local ceph-mon[61345]: pgmap v1349: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:00.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:42:59 vm05.local ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:43:00.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:42:59 
vm05.local ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:43:00.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:42:59 vm05.local ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:43:00.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:42:59 vm05.local ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:43:00.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:42:59 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:43:00.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:42:59 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:43:00.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:42:59 vm05.local ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:43:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:59 vm05.local ceph-mon[51870]: pgmap v1349: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:59 vm05.local ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:43:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:59 vm05.local ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:43:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:59 vm05.local ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:43:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:59 vm05.local ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:43:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:59 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:43:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:59 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:43:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:42:59 vm05.local ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:43:01.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:43:00 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:43:01.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:43:00 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:43:01.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:43:00 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:43:02.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:43:01 vm09.local ceph-mon[54524]: pgmap v1350: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:43:02.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:43:01 vm05.local ceph-mon[61345]: pgmap v1350: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:43:02.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 
20:43:01 vm05.local ceph-mon[51870]: pgmap v1350: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:43:04.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:43:03 vm05.local ceph-mon[61345]: pgmap v1351: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:04.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:43:03 vm05.local ceph-mon[51870]: pgmap v1351: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:04.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:43:03 vm09.local ceph-mon[54524]: pgmap v1351: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:06.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:43:05 vm05.local ceph-mon[61345]: pgmap v1352: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:06.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:43:05 vm05.local ceph-mon[51870]: pgmap v1352: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:06.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:43:05 vm09.local ceph-mon[54524]: pgmap v1352: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:06.773 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:43:06 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:43:08.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:43:07 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:43:08.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:43:07 vm05.local ceph-mon[61345]: pgmap v1353: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:43:08.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:43:07 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:43:08.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:43:07 vm05.local ceph-mon[51870]: pgmap v1353: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:43:08.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:43:07 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:43:08.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:43:07 vm09.local ceph-mon[54524]: pgmap v1353: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:43:08.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:43:08 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:43:08] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:43:10.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:43:09 vm09.local ceph-mon[54524]: pgmap v1354: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 
0 op/s 2026-03-09T20:43:10.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:43:09 vm05.local ceph-mon[61345]: pgmap v1354: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:10.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:43:09 vm05.local ceph-mon[51870]: pgmap v1354: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:12.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:43:11 vm09.local ceph-mon[54524]: pgmap v1355: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:43:12.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:43:11 vm05.local ceph-mon[61345]: pgmap v1355: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:43:12.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:43:11 vm05.local ceph-mon[51870]: pgmap v1355: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:43:14.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:43:13 vm09.local ceph-mon[54524]: pgmap v1356: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:14.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:43:13 vm05.local ceph-mon[61345]: pgmap v1356: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:14.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:43:13 vm05.local ceph-mon[51870]: pgmap v1356: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:16.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:43:15 vm09.local ceph-mon[54524]: pgmap v1357: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:16.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:43:15 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:43:16.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:43:16 vm05.local ceph-mon[61345]: pgmap v1357: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:16.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:43:16 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:43:16.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:43:15 vm05.local ceph-mon[51870]: pgmap v1357: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:16.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:43:15 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:43:16.769 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:43:16 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:43:18.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:43:18 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service 
status", "format": "json"}]: dispatch 2026-03-09T20:43:18.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:43:18 vm09.local ceph-mon[54524]: pgmap v1358: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:43:18.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:43:18 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:43:18.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:43:18 vm05.local ceph-mon[61345]: pgmap v1358: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:43:18.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:43:18 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:43:18.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:43:18 vm05.local ceph-mon[51870]: pgmap v1358: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:43:18.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:43:18 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:43:18] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:43:20.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:43:20 vm09.local ceph-mon[54524]: pgmap v1359: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:20.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:43:20 vm05.local ceph-mon[61345]: pgmap v1359: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:20.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:43:20 vm05.local ceph-mon[51870]: pgmap v1359: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:22.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:43:22 vm09.local ceph-mon[54524]: pgmap v1360: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:43:22.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:43:22 vm05.local ceph-mon[61345]: pgmap v1360: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:43:22.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:43:22 vm05.local ceph-mon[51870]: pgmap v1360: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:43:24.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:43:24 vm09.local ceph-mon[54524]: pgmap v1361: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:24.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:43:24 vm05.local ceph-mon[61345]: pgmap v1361: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:24.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:43:24 vm05.local ceph-mon[51870]: pgmap v1361: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:26.356 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:43:26 vm09.local ceph-mon[54524]: pgmap v1362: 
228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:26.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:43:26 vm05.local ceph-mon[61345]: pgmap v1362: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:26.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:43:26 vm05.local ceph-mon[51870]: pgmap v1362: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:26.773 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:43:26 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:43:28.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:43:28 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:43:28.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:43:28 vm05.local ceph-mon[61345]: pgmap v1363: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:43:28.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:43:28 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:43:28.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:43:28 vm05.local ceph-mon[51870]: pgmap v1363: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:43:28.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:43:28 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:43:28.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:43:28 vm09.local ceph-mon[54524]: pgmap v1363: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:43:28.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:43:28 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:43:28] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:43:29.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:43:29 vm05.local ceph-mon[61345]: pgmap v1364: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:29.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:43:29 vm05.local ceph-mon[51870]: pgmap v1364: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:29.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:43:29 vm09.local ceph-mon[54524]: pgmap v1364: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:43:30 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:43:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:43:30 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 
2026-03-09T20:43:31.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:43:30 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:43:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:43:31 vm05.local ceph-mon[61345]: pgmap v1365: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:43:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:43:31 vm05.local ceph-mon[51870]: pgmap v1365: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:43:32.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:43:31 vm09.local ceph-mon[54524]: pgmap v1365: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:43:34.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:43:33 vm09.local ceph-mon[54524]: pgmap v1366: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:34.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:43:33 vm05.local ceph-mon[61345]: pgmap v1366: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:34.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:43:33 vm05.local ceph-mon[51870]: pgmap v1366: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:36.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:43:35 vm09.local ceph-mon[54524]: pgmap v1367: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:36.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:43:35 vm05.local ceph-mon[61345]: pgmap v1367: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:36.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:43:35 vm05.local ceph-mon[51870]: pgmap v1367: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:36.773 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:43:36 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:43:38.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:43:37 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:43:38.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:43:37 vm09.local ceph-mon[54524]: pgmap v1368: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:43:38.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:43:37 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:43:38.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:43:37 vm05.local ceph-mon[61345]: pgmap v1368: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:43:38.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:43:37 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' 
cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:43:38.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:43:37 vm05.local ceph-mon[51870]: pgmap v1368: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:43:38.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:43:38 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:43:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:43:40.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:43:39 vm05.local ceph-mon[61345]: pgmap v1369: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:40.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:43:39 vm05.local ceph-mon[51870]: pgmap v1369: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:40.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:43:39 vm09.local ceph-mon[54524]: pgmap v1369: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:42.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:43:41 vm05.local ceph-mon[61345]: pgmap v1370: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:43:42.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:43:41 vm05.local ceph-mon[51870]: pgmap v1370: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:43:42.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:43:41 vm09.local ceph-mon[54524]: pgmap v1370: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:43:44.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:43:44 vm05.local ceph-mon[61345]: pgmap v1371: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:44.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:43:44 vm05.local ceph-mon[51870]: pgmap v1371: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:44.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:43:44 vm09.local ceph-mon[54524]: pgmap v1371: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:45.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:43:45 vm09.local ceph-mon[54524]: pgmap v1372: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:45.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:43:45 vm05.local ceph-mon[61345]: pgmap v1372: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:45.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:43:45 vm05.local ceph-mon[51870]: pgmap v1372: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:46.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:43:46 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:43:46.523 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:43:46 vm09.local 
ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:43:46.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:43:46 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:43:46.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:43:46 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:43:47.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:43:47 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:43:47.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:43:47 vm05.local ceph-mon[61345]: pgmap v1373: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:43:47.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:43:47 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:43:47.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:43:47 vm05.local ceph-mon[51870]: pgmap v1373: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:43:47.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:43:47 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:43:47.787 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:43:47 vm09.local ceph-mon[54524]: pgmap v1373: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:43:48.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:43:48 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:43:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:43:49.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:43:49 vm05.local ceph-mon[61345]: pgmap v1374: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:49.914 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:43:49 vm05.local ceph-mon[51870]: pgmap v1374: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:50.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:43:49 vm09.local ceph-mon[54524]: pgmap v1374: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:43:51 vm05.local ceph-mon[61345]: pgmap v1375: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:43:51.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:43:51 vm05.local ceph-mon[51870]: pgmap v1375: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:43:52.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:43:51 vm09.local ceph-mon[54524]: pgmap v1375: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 
KiB/s rd, 1 op/s 2026-03-09T20:43:53.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:43:53 vm05.local ceph-mon[61345]: pgmap v1376: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:53.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:43:53 vm05.local ceph-mon[51870]: pgmap v1376: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:54.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:43:53 vm09.local ceph-mon[54524]: pgmap v1376: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:55.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:43:55 vm05.local ceph-mon[61345]: pgmap v1377: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:55.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:43:55 vm05.local ceph-mon[51870]: pgmap v1377: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:56.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:43:55 vm09.local ceph-mon[54524]: pgmap v1377: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:56.773 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:43:56 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:43:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:43:57 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:43:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:43:57 vm05.local ceph-mon[61345]: pgmap v1378: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:43:57.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:43:57 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:43:57.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:43:57 vm05.local ceph-mon[51870]: pgmap v1378: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:43:57.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:43:57 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:43:57.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:43:57 vm09.local ceph-mon[54524]: pgmap v1378: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:43:58.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:43:58 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:43:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:43:59.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:43:59 vm05.local ceph-mon[51870]: pgmap v1379: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:43:59.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:43:59 vm05.local ceph-mon[61345]: pgmap v1379: 228 
pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:00.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:43:59 vm09.local ceph-mon[54524]: pgmap v1379: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:00.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:44:00 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:44:00.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:44:00 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:44:00.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:44:00 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:44:00.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:44:00 vm05.local ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:44:00.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:44:00 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:44:00.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:44:00 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:44:00.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:44:00 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:44:00.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:44:00 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:44:00.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:44:00 vm05.local ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:44:00.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:44:00 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:44:01.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:44:00 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:44:01.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:44:00 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:44:01.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:44:00 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:44:01.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:44:00 vm09.local ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:44:01.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:44:00 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 
2026-03-09T20:44:02.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:44:01 vm05.local ceph-mon[61345]: pgmap v1380: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:44:02.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:44:01 vm05.local ceph-mon[51870]: pgmap v1380: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:44:02.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:44:01 vm09.local ceph-mon[54524]: pgmap v1380: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:44:04.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:44:03 vm09.local ceph-mon[54524]: pgmap v1381: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:44:03 vm05.local ceph-mon[61345]: pgmap v1381: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:44:03 vm05.local ceph-mon[51870]: pgmap v1381: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:06.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:44:05 vm09.local ceph-mon[54524]: pgmap v1382: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:06.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:44:05 vm05.local ceph-mon[61345]: pgmap v1382: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:44:05 vm05.local ceph-mon[51870]: pgmap v1382: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:06.773 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:44:06 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:44:08.212 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:44:07 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:44:08.212 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:44:07 vm05.local ceph-mon[61345]: pgmap v1383: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:44:08.212 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:44:07 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:44:08.212 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:44:07 vm05.local ceph-mon[51870]: pgmap v1383: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:44:08.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:44:07 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:44:08.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:44:07 vm09.local ceph-mon[54524]: pgmap v1383: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB 
used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:44:08.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:44:08 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:44:08] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:44:10.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:44:09 vm09.local ceph-mon[54524]: pgmap v1384: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:10.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:44:09 vm05.local ceph-mon[61345]: pgmap v1384: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:10.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:44:09 vm05.local ceph-mon[51870]: pgmap v1384: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:12.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:44:11 vm09.local ceph-mon[54524]: pgmap v1385: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:44:12.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:44:11 vm05.local ceph-mon[61345]: pgmap v1385: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:44:12.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:44:11 vm05.local ceph-mon[51870]: pgmap v1385: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:44:14.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:44:13 vm09.local ceph-mon[54524]: pgmap v1386: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:14.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:44:13 vm05.local ceph-mon[61345]: pgmap v1386: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:14.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:44:13 vm05.local ceph-mon[51870]: pgmap v1386: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:16.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:44:15 vm09.local ceph-mon[54524]: pgmap v1387: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:16.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:44:15 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:44:16.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:44:15 vm05.local ceph-mon[61345]: pgmap v1387: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:16.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:44:15 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:44:16.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:44:15 vm05.local ceph-mon[51870]: pgmap v1387: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:16.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:44:15 vm05.local ceph-mon[51870]: from='mgr.24602 
v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:44:16.773 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:44:16 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:44:18.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:44:17 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:44:18.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:44:17 vm09.local ceph-mon[54524]: pgmap v1388: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:44:18.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:44:17 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:44:18.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:44:17 vm05.local ceph-mon[61345]: pgmap v1388: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:44:18.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:44:17 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:44:18.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:44:17 vm05.local ceph-mon[51870]: pgmap v1388: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:44:18.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:44:18 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:44:18] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:44:20.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:44:19 vm09.local ceph-mon[54524]: pgmap v1389: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:20.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:44:19 vm05.local ceph-mon[61345]: pgmap v1389: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:20.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:44:19 vm05.local ceph-mon[51870]: pgmap v1389: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:22.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:44:22 vm05.local ceph-mon[61345]: pgmap v1390: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:44:22.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:44:22 vm05.local ceph-mon[51870]: pgmap v1390: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:44:22.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:44:22 vm09.local ceph-mon[54524]: pgmap v1390: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:44:23.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:44:23 vm05.local ceph-mon[61345]: pgmap v1391: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:23.410 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:44:23 vm05.local ceph-mon[51870]: pgmap v1391: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:23.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:44:23 vm09.local ceph-mon[54524]: pgmap v1391: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:25.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:44:25 vm05.local ceph-mon[61345]: pgmap v1392: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:25.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:44:25 vm05.local ceph-mon[51870]: pgmap v1392: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:26.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:44:25 vm09.local ceph-mon[54524]: pgmap v1392: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:26.773 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:44:26 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:44:27.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:44:27 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:44:27.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:44:27 vm05.local ceph-mon[61345]: pgmap v1393: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:44:27.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:44:27 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:44:27.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:44:27 vm05.local ceph-mon[51870]: pgmap v1393: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:44:28.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:44:27 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:44:28.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:44:27 vm09.local ceph-mon[54524]: pgmap v1393: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:44:28.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:44:28 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:44:28] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:44:29.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:44:29 vm05.local ceph-mon[61345]: pgmap v1394: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:44:29 vm05.local ceph-mon[51870]: pgmap v1394: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:30.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:44:29 vm09.local ceph-mon[54524]: pgmap v1394: 228 pgs: 228 active+clean; 455 KiB data, 1.0 
GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:30.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:44:30 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:44:30.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:44:30 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:44:31.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:44:30 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:44:31.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:44:31 vm05.local ceph-mon[61345]: pgmap v1395: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:44:31.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:44:31 vm05.local ceph-mon[51870]: pgmap v1395: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:44:32.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:44:31 vm09.local ceph-mon[54524]: pgmap v1395: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:44:33.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:44:33 vm05.local ceph-mon[61345]: pgmap v1396: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:33.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:44:33 vm05.local ceph-mon[51870]: pgmap v1396: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:34.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:44:33 vm09.local ceph-mon[54524]: pgmap v1396: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:35.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:44:35 vm05.local ceph-mon[61345]: pgmap v1397: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:35.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:44:35 vm05.local ceph-mon[51870]: pgmap v1397: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:36.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:44:35 vm09.local ceph-mon[54524]: pgmap v1397: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:36.773 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:44:36 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:44:37.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:44:37 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:44:37.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:44:37 vm05.local ceph-mon[61345]: pgmap v1398: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:44:37.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:44:37 vm05.local ceph-mon[51870]: from='client.14610 
v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:44:37.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:44:37 vm05.local ceph-mon[51870]: pgmap v1398: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:44:38.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:44:37 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:44:38.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:44:37 vm09.local ceph-mon[54524]: pgmap v1398: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:44:38.911 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:44:38 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:44:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:44:39.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:44:39 vm05.local ceph-mon[61345]: pgmap v1399: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:39.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:44:39 vm05.local ceph-mon[51870]: pgmap v1399: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:40.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:44:39 vm09.local ceph-mon[54524]: pgmap v1399: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:41.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:44:41 vm05.local ceph-mon[61345]: pgmap v1400: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:44:41.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:44:41 vm05.local ceph-mon[51870]: pgmap v1400: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:44:42.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:44:41 vm09.local ceph-mon[54524]: pgmap v1400: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:44:44.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:44:43 vm05.local ceph-mon[61345]: pgmap v1401: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:44.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:44:43 vm05.local ceph-mon[51870]: pgmap v1401: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:44.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:44:43 vm09.local ceph-mon[54524]: pgmap v1401: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:46.075 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:44:45 vm09.local ceph-mon[54524]: pgmap v1402: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:46.075 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:44:45 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:44:46.160 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:44:45 vm05.local ceph-mon[61345]: pgmap v1402: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:46.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:44:45 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:44:46.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:44:45 vm05.local ceph-mon[51870]: pgmap v1402: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:46.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:44:45 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:44:46.773 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:44:46 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:44:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:44:47 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:44:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:44:47 vm05.local ceph-mon[61345]: pgmap v1403: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:44:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:44:47 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:44:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:44:47 vm05.local ceph-mon[51870]: pgmap v1403: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:44:48.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:44:47 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:44:48.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:44:47 vm09.local ceph-mon[54524]: pgmap v1403: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:44:48.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:44:48 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:44:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:44:50.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:44:49 vm05.local ceph-mon[61345]: pgmap v1404: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:50.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:44:49 vm05.local ceph-mon[51870]: pgmap v1404: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:50.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:44:49 vm09.local ceph-mon[54524]: pgmap v1404: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:52.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:44:52 vm09.local ceph-mon[54524]: pgmap v1405: 228 pgs: 228 
active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:44:52.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:44:52 vm05.local ceph-mon[61345]: pgmap v1405: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:44:52.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:44:51 vm05.local ceph-mon[51870]: pgmap v1405: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:44:54.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:44:54 vm09.local ceph-mon[54524]: pgmap v1406: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:54.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:44:54 vm05.local ceph-mon[61345]: pgmap v1406: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:54.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:44:54 vm05.local ceph-mon[51870]: pgmap v1406: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:56.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:44:56 vm05.local ceph-mon[61345]: pgmap v1407: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:56.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:44:56 vm05.local ceph-mon[51870]: pgmap v1407: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:56.417 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:44:56 vm09.local ceph-mon[54524]: pgmap v1407: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:56.773 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:44:56 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:44:57.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:44:57 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:44:57.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:44:57 vm05.local ceph-mon[61345]: pgmap v1408: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:44:57.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:44:57 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:44:57.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:44:57 vm05.local ceph-mon[51870]: pgmap v1408: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:44:57.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:44:57 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:44:57.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:44:57 vm09.local ceph-mon[54524]: pgmap v1408: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:44:58.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:44:58 
vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:44:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:44:59.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:44:59 vm05.local ceph-mon[61345]: pgmap v1409: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:44:59.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:44:59 vm05.local ceph-mon[51870]: pgmap v1409: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:00.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:44:59 vm09.local ceph-mon[54524]: pgmap v1409: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:01.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:45:00 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:45:01.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:45:00 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:45:01.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:45:00 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:45:01.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:45:00 vm05.local ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:45:01.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:45:00 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:45:01.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:45:00 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:45:01.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:45:00 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:45:01.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:45:00 vm05.local ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:45:01.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:45:00 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:45:01.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:45:00 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:45:01.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:45:00 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:45:01.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:45:00 vm09.local ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:45:02.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:45:01 vm05.local ceph-mon[61345]: pgmap v1410: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 
1 op/s 2026-03-09T20:45:02.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:45:01 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:45:02.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:45:01 vm05.local ceph-mon[51870]: pgmap v1410: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:45:02.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:45:01 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:45:02.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:45:01 vm09.local ceph-mon[54524]: pgmap v1410: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:45:02.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:45:01 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:45:04.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:45:03 vm05.local ceph-mon[61345]: pgmap v1411: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:04.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:45:03 vm05.local ceph-mon[51870]: pgmap v1411: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:04.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:45:03 vm09.local ceph-mon[54524]: pgmap v1411: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:06.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:45:05 vm05.local ceph-mon[61345]: pgmap v1412: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:06.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:45:05 vm05.local ceph-mon[51870]: pgmap v1412: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:06.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:45:05 vm09.local ceph-mon[54524]: pgmap v1412: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:06.773 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:45:06 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:45:08.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:45:07 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:45:08.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:45:07 vm09.local ceph-mon[54524]: pgmap v1413: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:45:08.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:45:07 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:45:08.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:45:07 vm05.local ceph-mon[61345]: pgmap v1413: 228 pgs: 228 active+clean; 455 
KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:45:08.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:45:07 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:45:08.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:45:07 vm05.local ceph-mon[51870]: pgmap v1413: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:45:08.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:45:08 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:45:08] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:45:10.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:45:09 vm09.local ceph-mon[54524]: pgmap v1414: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:10.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:45:09 vm05.local ceph-mon[61345]: pgmap v1414: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:10.422 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:45:09 vm05.local ceph-mon[51870]: pgmap v1414: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:12.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:45:11 vm09.local ceph-mon[54524]: pgmap v1415: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:45:12.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:45:11 vm05.local ceph-mon[61345]: pgmap v1415: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:45:12.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:45:11 vm05.local ceph-mon[51870]: pgmap v1415: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:45:14.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:45:13 vm09.local ceph-mon[54524]: pgmap v1416: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:14.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:45:13 vm05.local ceph-mon[61345]: pgmap v1416: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:14.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:45:13 vm05.local ceph-mon[51870]: pgmap v1416: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:16.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:45:15 vm09.local ceph-mon[54524]: pgmap v1417: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:16.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:45:15 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:45:16.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:45:15 vm05.local ceph-mon[61345]: pgmap v1417: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:16.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:45:15 vm05.local ceph-mon[61345]: 
from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:45:16.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:45:15 vm05.local ceph-mon[51870]: pgmap v1417: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:16.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:45:15 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:45:16.773 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:45:16 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:45:18.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:45:17 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:45:18.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:45:17 vm09.local ceph-mon[54524]: pgmap v1418: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:45:18.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:45:17 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:45:18.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:45:17 vm05.local ceph-mon[61345]: pgmap v1418: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:45:18.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:45:17 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:45:18.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:45:17 vm05.local ceph-mon[51870]: pgmap v1418: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:45:18.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:45:18 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:45:18] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:45:20.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:45:20 vm09.local ceph-mon[54524]: pgmap v1419: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:20.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:45:20 vm05.local ceph-mon[61345]: pgmap v1419: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:20.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:45:20 vm05.local ceph-mon[51870]: pgmap v1419: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:22.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:45:22 vm09.local ceph-mon[54524]: pgmap v1420: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:45:22.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:45:22 vm05.local ceph-mon[61345]: pgmap v1420: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 
1 op/s 2026-03-09T20:45:22.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:45:22 vm05.local ceph-mon[51870]: pgmap v1420: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:45:24.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:45:24 vm09.local ceph-mon[54524]: pgmap v1421: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:24.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:45:24 vm05.local ceph-mon[61345]: pgmap v1421: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:24.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:45:24 vm05.local ceph-mon[51870]: pgmap v1421: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:26.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:45:26 vm05.local ceph-mon[61345]: pgmap v1422: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:26.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:45:26 vm05.local ceph-mon[51870]: pgmap v1422: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:26.446 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:45:26 vm09.local ceph-mon[54524]: pgmap v1422: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:26.773 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:45:26 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:45:28.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:45:28 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:45:28.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:45:28 vm05.local ceph-mon[61345]: pgmap v1423: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:45:28.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:45:28 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:45:28.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:45:28 vm05.local ceph-mon[51870]: pgmap v1423: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:45:28.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:45:28 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:45:28.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:45:28 vm09.local ceph-mon[54524]: pgmap v1423: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:45:28.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:45:28 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:45:28] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:45:30.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:45:30 vm05.local ceph-mon[61345]: pgmap v1424: 228 pgs: 228 
active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:30.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:45:30 vm05.local ceph-mon[51870]: pgmap v1424: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:30.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:45:30 vm09.local ceph-mon[54524]: pgmap v1424: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:31.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:45:31 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:45:31.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:45:31 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:45:31.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:45:31 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:45:32.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:45:32 vm05.local ceph-mon[61345]: pgmap v1425: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:45:32.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:45:32 vm05.local ceph-mon[51870]: pgmap v1425: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:45:32.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:45:32 vm09.local ceph-mon[54524]: pgmap v1425: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:45:33.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:45:33 vm05.local ceph-mon[61345]: pgmap v1426: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:33.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:45:33 vm05.local ceph-mon[51870]: pgmap v1426: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:33.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:45:33 vm09.local ceph-mon[54524]: pgmap v1426: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:35.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:45:35 vm05.local ceph-mon[61345]: pgmap v1427: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:35.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:45:35 vm05.local ceph-mon[51870]: pgmap v1427: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:36.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:45:35 vm09.local ceph-mon[54524]: pgmap v1427: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:36.773 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:45:36 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:45:37.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:45:37 vm05.local ceph-mon[61345]: from='client.14610 
v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:45:37.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:45:37 vm05.local ceph-mon[61345]: pgmap v1428: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:45:37.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:45:37 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:45:37.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:45:37 vm05.local ceph-mon[51870]: pgmap v1428: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:45:38.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:45:37 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:45:38.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:45:37 vm09.local ceph-mon[54524]: pgmap v1428: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:45:38.911 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:45:38 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:45:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:45:39.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:45:39 vm05.local ceph-mon[51870]: pgmap v1429: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:39.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:45:39 vm05.local ceph-mon[61345]: pgmap v1429: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:40.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:45:39 vm09.local ceph-mon[54524]: pgmap v1429: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:41.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:45:41 vm05.local ceph-mon[61345]: pgmap v1430: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:45:41.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:45:41 vm05.local ceph-mon[51870]: pgmap v1430: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:45:42.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:45:41 vm09.local ceph-mon[54524]: pgmap v1430: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:45:43.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:45:43 vm05.local ceph-mon[61345]: pgmap v1431: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:43.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:45:43 vm05.local ceph-mon[51870]: pgmap v1431: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:44.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:45:43 vm09.local ceph-mon[54524]: pgmap v1431: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:45.910 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:45:45 vm05.local ceph-mon[61345]: pgmap v1432: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:45.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:45:45 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:45:45.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:45:45 vm05.local ceph-mon[51870]: pgmap v1432: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:45.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:45:45 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:45:46.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:45:45 vm09.local ceph-mon[54524]: pgmap v1432: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:46.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:45:45 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:45:46.773 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:45:46 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:45:47.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:45:47 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:45:47.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:45:47 vm05.local ceph-mon[61345]: pgmap v1433: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:45:47.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:45:47 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:45:47.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:45:47 vm05.local ceph-mon[51870]: pgmap v1433: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:45:48.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:45:47 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:45:48.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:45:47 vm09.local ceph-mon[54524]: pgmap v1433: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:45:48.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:45:48 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:45:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:45:49.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:45:49 vm05.local ceph-mon[61345]: pgmap v1434: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:49.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:45:49 vm05.local ceph-mon[51870]: pgmap 
v1434: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:50.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:45:49 vm09.local ceph-mon[54524]: pgmap v1434: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:51.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:45:51 vm05.local ceph-mon[61345]: pgmap v1435: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:45:51.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:45:51 vm05.local ceph-mon[51870]: pgmap v1435: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:45:52.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:45:51 vm09.local ceph-mon[54524]: pgmap v1435: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:45:54.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:45:53 vm09.local ceph-mon[54524]: pgmap v1436: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:54.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:45:53 vm05.local ceph-mon[61345]: pgmap v1436: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:54.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:45:53 vm05.local ceph-mon[51870]: pgmap v1436: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:56.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:45:55 vm09.local ceph-mon[54524]: pgmap v1437: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:56.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:45:55 vm05.local ceph-mon[61345]: pgmap v1437: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:56.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:45:55 vm05.local ceph-mon[51870]: pgmap v1437: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:45:56.773 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:45:56 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:45:57.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:45:57 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:45:57.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:45:57 vm09.local ceph-mon[54524]: pgmap v1438: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:45:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:45:57 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:45:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:45:57 vm05.local ceph-mon[61345]: pgmap v1438: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:45:57.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:45:57 vm05.local 
ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:45:57.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:45:57 vm05.local ceph-mon[51870]: pgmap v1438: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:45:58.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:45:58 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:45:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:46:00.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:45:59 vm09.local ceph-mon[54524]: pgmap v1439: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:00.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:45:59 vm05.local ceph-mon[61345]: pgmap v1439: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:45:59 vm05.local ceph-mon[51870]: pgmap v1439: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:01.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:46:00 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:46:01.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:46:00 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:46:01.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:46:00 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:46:01.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:46:00 vm09.local ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:46:01.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:46:00 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:46:01.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:46:00 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:46:01.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:46:00 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:46:01.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:46:00 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:46:01.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:46:00 vm05.local ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:46:01.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:46:00 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:46:01.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:46:00 vm05.local ceph-mon[51870]: 
from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:46:01.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:46:00 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:46:01.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:46:00 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:46:01.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:46:00 vm05.local ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:46:01.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:46:00 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:46:02.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:46:01 vm09.local ceph-mon[54524]: pgmap v1440: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:46:02.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:46:01 vm05.local ceph-mon[61345]: pgmap v1440: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:46:02.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:46:01 vm05.local ceph-mon[51870]: pgmap v1440: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:46:04.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:46:03 vm05.local ceph-mon[61345]: pgmap v1441: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:04.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:46:03 vm05.local ceph-mon[51870]: pgmap v1441: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:04.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:46:03 vm09.local ceph-mon[54524]: pgmap v1441: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:06.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:46:05 vm09.local ceph-mon[54524]: pgmap v1442: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:06.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:46:05 vm05.local ceph-mon[61345]: pgmap v1442: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:46:05 vm05.local ceph-mon[51870]: pgmap v1442: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:06.773 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:46:06 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:46:08.211 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:46:07 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:46:08.211 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:46:07 vm05.local ceph-mon[61345]: pgmap v1443: 228 pgs: 228 active+clean; 455 
KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:46:08.211 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:46:07 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:46:08.211 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:46:07 vm05.local ceph-mon[51870]: pgmap v1443: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:46:08.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:46:07 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:46:08.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:46:07 vm09.local ceph-mon[54524]: pgmap v1443: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:46:08.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:46:08 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:46:08] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:46:10.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:46:10 vm05.local ceph-mon[61345]: pgmap v1444: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:10.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:46:10 vm05.local ceph-mon[51870]: pgmap v1444: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:10.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:46:10 vm09.local ceph-mon[54524]: pgmap v1444: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:12.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:46:12 vm05.local ceph-mon[61345]: pgmap v1445: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:46:12.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:46:12 vm05.local ceph-mon[51870]: pgmap v1445: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:46:12.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:46:12 vm09.local ceph-mon[54524]: pgmap v1445: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:46:13.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:46:13 vm09.local ceph-mon[54524]: pgmap v1446: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:13.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:46:13 vm05.local ceph-mon[61345]: pgmap v1446: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:13.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:46:13 vm05.local ceph-mon[51870]: pgmap v1446: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:15.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:46:15 vm05.local ceph-mon[61345]: pgmap v1447: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:15.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:46:15 vm05.local 
ceph-mon[51870]: pgmap v1447: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:16.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:46:15 vm09.local ceph-mon[54524]: pgmap v1447: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:16.744 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:46:16 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:46:17.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:46:16 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:46:17.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:46:16 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:46:17.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:46:16 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:46:18.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:46:17 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:46:18.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:46:17 vm09.local ceph-mon[54524]: pgmap v1448: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:46:18.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:46:17 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:46:18.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:46:17 vm05.local ceph-mon[61345]: pgmap v1448: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:46:18.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:46:17 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:46:18.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:46:17 vm05.local ceph-mon[51870]: pgmap v1448: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:46:18.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:46:18 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:46:18] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:46:20.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:46:19 vm09.local ceph-mon[54524]: pgmap v1449: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:20.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:46:19 vm05.local ceph-mon[61345]: pgmap v1449: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:20.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:46:19 vm05.local ceph-mon[51870]: pgmap v1449: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB 
/ 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:22.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:46:21 vm05.local ceph-mon[61345]: pgmap v1450: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:46:22.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:46:21 vm05.local ceph-mon[51870]: pgmap v1450: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:46:22.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:46:21 vm09.local ceph-mon[54524]: pgmap v1450: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:46:24.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:46:24 vm09.local ceph-mon[54524]: pgmap v1451: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:24.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:46:23 vm05.local ceph-mon[61345]: pgmap v1451: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:24.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:46:23 vm05.local ceph-mon[51870]: pgmap v1451: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:26.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:46:26 vm09.local ceph-mon[54524]: pgmap v1452: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:26.523 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:46:26 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:46:26.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:46:26 vm05.local ceph-mon[61345]: pgmap v1452: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:26.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:46:26 vm05.local ceph-mon[51870]: pgmap v1452: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:27.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:46:27 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:46:27.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:46:27 vm09.local ceph-mon[54524]: pgmap v1453: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:46:27.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:46:27 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:46:27.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:46:27 vm05.local ceph-mon[61345]: pgmap v1453: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:46:27.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:46:27 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:46:27.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:46:27 vm05.local ceph-mon[51870]: pgmap v1453: 228 pgs: 228 
active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:46:28.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:46:28 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:46:28] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:46:29.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:46:29 vm05.local ceph-mon[61345]: pgmap v1454: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:29.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:46:29 vm05.local ceph-mon[51870]: pgmap v1454: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:30.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:46:29 vm09.local ceph-mon[54524]: pgmap v1454: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:31.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:46:30 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:46:31.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:46:30 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:46:31.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:46:30 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:46:32.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:46:31 vm05.local ceph-mon[61345]: pgmap v1455: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:46:32.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:46:31 vm05.local ceph-mon[51870]: pgmap v1455: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:46:32.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:46:31 vm09.local ceph-mon[54524]: pgmap v1455: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:46:34.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:46:33 vm05.local ceph-mon[61345]: pgmap v1456: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:34.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:46:33 vm05.local ceph-mon[51870]: pgmap v1456: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:34.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:46:33 vm09.local ceph-mon[54524]: pgmap v1456: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:36.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:46:35 vm09.local ceph-mon[54524]: pgmap v1457: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:36.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:46:35 vm05.local ceph-mon[61345]: pgmap v1457: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:36.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:46:35 vm05.local 
ceph-mon[51870]: pgmap v1457: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:36.773 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:46:36 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:46:38.212 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:46:37 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:46:38.212 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:46:37 vm05.local ceph-mon[61345]: pgmap v1458: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:46:38.212 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:46:37 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:46:38.212 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:46:37 vm05.local ceph-mon[51870]: pgmap v1458: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:46:38.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:46:37 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:46:38.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:46:37 vm09.local ceph-mon[54524]: pgmap v1458: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:46:38.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:46:38 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:46:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:46:40.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:46:39 vm09.local ceph-mon[54524]: pgmap v1459: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:40.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:46:39 vm05.local ceph-mon[61345]: pgmap v1459: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:40.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:46:39 vm05.local ceph-mon[51870]: pgmap v1459: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:42.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:46:41 vm09.local ceph-mon[54524]: pgmap v1460: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:46:42.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:46:41 vm05.local ceph-mon[61345]: pgmap v1460: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:46:42.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:46:41 vm05.local ceph-mon[51870]: pgmap v1460: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:46:44.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:46:43 vm09.local ceph-mon[54524]: pgmap v1461: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T20:46:44.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:46:43 vm05.local ceph-mon[61345]: pgmap v1461: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:44.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:46:43 vm05.local ceph-mon[51870]: pgmap v1461: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:46.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:46:45 vm09.local ceph-mon[54524]: pgmap v1462: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:46.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:46:45 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:46:46.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:46:45 vm05.local ceph-mon[61345]: pgmap v1462: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:46.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:46:45 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:46:46.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:46:45 vm05.local ceph-mon[51870]: pgmap v1462: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:46.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:46:45 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:46:46.773 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:46:46 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:46:48.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:46:47 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:46:48.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:46:47 vm09.local ceph-mon[54524]: pgmap v1463: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:46:48.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:46:47 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:46:48.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:46:47 vm05.local ceph-mon[61345]: pgmap v1463: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:46:48.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:46:47 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:46:48.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:46:47 vm05.local ceph-mon[51870]: pgmap v1463: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:46:48.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:46:48 vm05.local 
ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:46:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:46:50.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:46:49 vm09.local ceph-mon[54524]: pgmap v1464: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:50.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:46:49 vm05.local ceph-mon[61345]: pgmap v1464: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:50.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:46:49 vm05.local ceph-mon[51870]: pgmap v1464: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:52.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:46:52 vm05.local ceph-mon[61345]: pgmap v1465: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:46:52.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:46:52 vm05.local ceph-mon[51870]: pgmap v1465: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:46:52.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:46:52 vm09.local ceph-mon[54524]: pgmap v1465: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:46:54.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:46:54 vm05.local ceph-mon[61345]: pgmap v1466: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:54.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:46:54 vm05.local ceph-mon[51870]: pgmap v1466: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:54.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:46:54 vm09.local ceph-mon[54524]: pgmap v1466: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:56.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:46:56 vm05.local ceph-mon[61345]: pgmap v1467: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:56.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:46:56 vm05.local ceph-mon[51870]: pgmap v1467: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:56.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:46:56 vm09.local ceph-mon[54524]: pgmap v1467: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:46:56.773 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:46:56 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:46:57.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:46:57 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:46:57.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:46:57 vm05.local ceph-mon[61345]: pgmap v1468: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:46:57.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:46:57 
vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:46:57.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:46:57 vm05.local ceph-mon[51870]: pgmap v1468: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:46:57.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:46:57 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:46:57.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:46:57 vm09.local ceph-mon[54524]: pgmap v1468: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:46:58.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:46:58 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:46:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:47:00.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:46:59 vm09.local ceph-mon[54524]: pgmap v1469: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:00.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:46:59 vm05.local ceph-mon[61345]: pgmap v1469: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:46:59 vm05.local ceph-mon[51870]: pgmap v1469: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:01.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:47:00 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:47:01.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:47:00 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:47:01.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:47:00 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:47:01.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:47:00 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:47:01.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:47:00 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:47:01.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:47:00 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:47:02.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:47:01 vm09.local ceph-mon[54524]: pgmap v1470: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:47:02.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:47:01 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' 
entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:47:02.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:47:01 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:47:02.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:47:01 vm09.local ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:47:02.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:47:01 vm05.local ceph-mon[61345]: pgmap v1470: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:47:02.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:47:01 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:47:02.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:47:01 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:47:02.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:47:01 vm05.local ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:47:02.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:47:01 vm05.local ceph-mon[51870]: pgmap v1470: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:47:02.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:47:01 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:47:02.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:47:01 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:47:02.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:47:01 vm05.local ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:47:04.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:47:03 vm05.local ceph-mon[61345]: pgmap v1471: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:04.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:47:03 vm05.local ceph-mon[51870]: pgmap v1471: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:04.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:47:03 vm09.local ceph-mon[54524]: pgmap v1471: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:06.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:47:05 vm05.local ceph-mon[61345]: pgmap v1472: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:06.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:47:05 vm05.local ceph-mon[51870]: pgmap v1472: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:06.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:47:05 vm09.local ceph-mon[54524]: pgmap v1472: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:07.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:47:06 vm09.local 
ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:47:08.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:47:07 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:47:08.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:47:07 vm09.local ceph-mon[54524]: pgmap v1473: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:47:08.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:47:07 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:47:08.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:47:07 vm05.local ceph-mon[61345]: pgmap v1473: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:47:08.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:47:07 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:47:08.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:47:07 vm05.local ceph-mon[51870]: pgmap v1473: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:47:08.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:47:08 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:47:08] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:47:10.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:47:09 vm09.local ceph-mon[54524]: pgmap v1474: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:10.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:47:09 vm05.local ceph-mon[61345]: pgmap v1474: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:10.418 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:47:09 vm05.local ceph-mon[51870]: pgmap v1474: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:12.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:47:12 vm05.local ceph-mon[61345]: pgmap v1475: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:47:12.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:47:12 vm05.local ceph-mon[51870]: pgmap v1475: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:47:12.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:47:12 vm09.local ceph-mon[54524]: pgmap v1475: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:47:13.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:47:13 vm09.local ceph-mon[54524]: pgmap v1476: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:13.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:47:13 vm05.local ceph-mon[61345]: pgmap v1476: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T20:47:13.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:47:13 vm05.local ceph-mon[51870]: pgmap v1476: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:16.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:47:15 vm09.local ceph-mon[54524]: pgmap v1477: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:16.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:47:15 vm05.local ceph-mon[61345]: pgmap v1477: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:16.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:47:15 vm05.local ceph-mon[51870]: pgmap v1477: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:17.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:47:16 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:47:17.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:47:16 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:47:17.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:47:16 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:47:17.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:47:16 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:47:18.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:47:17 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:47:18.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:47:17 vm09.local ceph-mon[54524]: pgmap v1478: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:47:18.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:47:17 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:47:18.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:47:17 vm05.local ceph-mon[61345]: pgmap v1478: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:47:18.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:47:17 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:47:18.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:47:17 vm05.local ceph-mon[51870]: pgmap v1478: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:47:18.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:47:18 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:47:18] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:47:20.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:47:19 vm09.local 
ceph-mon[54524]: pgmap v1479: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:20.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:47:19 vm05.local ceph-mon[61345]: pgmap v1479: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:20.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:47:19 vm05.local ceph-mon[51870]: pgmap v1479: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:22.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:47:21 vm05.local ceph-mon[61345]: pgmap v1480: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:47:22.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:47:21 vm05.local ceph-mon[51870]: pgmap v1480: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:47:22.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:47:21 vm09.local ceph-mon[54524]: pgmap v1480: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:47:24.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:47:23 vm05.local ceph-mon[61345]: pgmap v1481: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:24.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:47:23 vm05.local ceph-mon[51870]: pgmap v1481: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:24.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:47:23 vm09.local ceph-mon[54524]: pgmap v1481: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:26.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:47:25 vm05.local ceph-mon[61345]: pgmap v1482: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:26.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:47:25 vm05.local ceph-mon[51870]: pgmap v1482: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:26.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:47:25 vm09.local ceph-mon[54524]: pgmap v1482: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:27.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:47:26 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:47:28.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:47:27 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:47:28.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:47:27 vm05.local ceph-mon[51870]: pgmap v1483: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:47:28.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:47:27 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:47:28.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:47:27 
vm05.local ceph-mon[61345]: pgmap v1483: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:47:28.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:47:27 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:47:28.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:47:27 vm09.local ceph-mon[54524]: pgmap v1483: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:47:28.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:47:28 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:47:28] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:47:30.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:47:29 vm05.local ceph-mon[51870]: pgmap v1484: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:30.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:47:29 vm05.local ceph-mon[61345]: pgmap v1484: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:30.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:47:29 vm09.local ceph-mon[54524]: pgmap v1484: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:31.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:47:30 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:47:31.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:47:30 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:47:31.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:47:30 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:47:32.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:47:31 vm09.local ceph-mon[54524]: pgmap v1485: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:47:32.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:47:31 vm05.local ceph-mon[61345]: pgmap v1485: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:47:32.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:47:31 vm05.local ceph-mon[51870]: pgmap v1485: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:47:34.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:47:33 vm09.local ceph-mon[54524]: pgmap v1486: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:34.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:47:33 vm05.local ceph-mon[51870]: pgmap v1486: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:34.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:47:33 vm05.local ceph-mon[61345]: pgmap v1486: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T20:47:36.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:47:35 vm09.local ceph-mon[54524]: pgmap v1487: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:36.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:47:35 vm05.local ceph-mon[51870]: pgmap v1487: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:36.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:47:35 vm05.local ceph-mon[61345]: pgmap v1487: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:37.024 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:47:36 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:47:38.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:47:37 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:47:38.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:47:37 vm09.local ceph-mon[54524]: pgmap v1488: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:47:38.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:47:37 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:47:38.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:47:37 vm05.local ceph-mon[51870]: pgmap v1488: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:47:38.411 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:47:37 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:47:38.411 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:47:37 vm05.local ceph-mon[61345]: pgmap v1488: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:47:38.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:47:38 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:47:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:47:40.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:47:40 vm05.local ceph-mon[61345]: pgmap v1489: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:40.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:47:40 vm05.local ceph-mon[51870]: pgmap v1489: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:40.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:47:40 vm09.local ceph-mon[54524]: pgmap v1489: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:42.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:47:42 vm05.local ceph-mon[61345]: pgmap v1490: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:47:42.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:47:42 vm05.local ceph-mon[51870]: pgmap v1490: 228 pgs: 228 
active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:47:42.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:47:42 vm09.local ceph-mon[54524]: pgmap v1490: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:47:44.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:47:44 vm05.local ceph-mon[61345]: pgmap v1491: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:44.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:47:44 vm05.local ceph-mon[51870]: pgmap v1491: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:44.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:47:44 vm09.local ceph-mon[54524]: pgmap v1491: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:46.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:47:46 vm05.local ceph-mon[61345]: pgmap v1492: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:46.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:47:46 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:47:46.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:47:46 vm05.local ceph-mon[51870]: pgmap v1492: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:46.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:47:46 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:47:46.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:47:46 vm09.local ceph-mon[54524]: pgmap v1492: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:46.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:47:46 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:47:47.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:47:46 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:47:47.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:47:47 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:47:47.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:47:47 vm05.local ceph-mon[61345]: pgmap v1493: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:47:47.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:47:47 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:47:47.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:47:47 vm05.local ceph-mon[51870]: pgmap v1493: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:47:47.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 
20:47:47 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:47:47.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:47:47 vm09.local ceph-mon[54524]: pgmap v1493: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:47:48.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:47:48 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:47:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:47:49.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:47:49 vm05.local ceph-mon[61345]: pgmap v1494: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:49.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:47:49 vm05.local ceph-mon[51870]: pgmap v1494: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:50.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:47:49 vm09.local ceph-mon[54524]: pgmap v1494: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:52.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:47:51 vm09.local ceph-mon[54524]: pgmap v1495: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:47:52.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:47:51 vm05.local ceph-mon[61345]: pgmap v1495: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:47:52.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:47:51 vm05.local ceph-mon[51870]: pgmap v1495: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:47:54.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:47:53 vm05.local ceph-mon[61345]: pgmap v1496: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:54.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:47:53 vm05.local ceph-mon[51870]: pgmap v1496: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:54.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:47:53 vm09.local ceph-mon[54524]: pgmap v1496: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:56.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:47:55 vm05.local ceph-mon[61345]: pgmap v1497: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:56.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:47:55 vm05.local ceph-mon[51870]: pgmap v1497: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:56.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:47:55 vm09.local ceph-mon[54524]: pgmap v1497: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:47:57.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:47:56 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:47:58.160 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:47:57 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:47:58.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:47:57 vm05.local ceph-mon[61345]: pgmap v1498: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:47:58.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:47:57 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:47:58.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:47:57 vm05.local ceph-mon[51870]: pgmap v1498: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:47:58.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:47:57 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:47:58.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:47:57 vm09.local ceph-mon[54524]: pgmap v1498: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:47:58.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:47:58 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:47:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:48:00.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:47:59 vm05.local ceph-mon[61345]: pgmap v1499: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:47:59 vm05.local ceph-mon[51870]: pgmap v1499: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:00.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:47:59 vm09.local ceph-mon[54524]: pgmap v1499: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:01.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:48:00 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:48:01.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:48:00 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:48:01.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:48:00 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:48:02.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:48:01 vm09.local ceph-mon[54524]: pgmap v1500: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:48:02.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:48:01 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:48:02.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:48:01 vm09.local 
ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:48:02.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:48:01 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:48:02.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:48:01 vm09.local ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:48:02.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:48:01 vm05.local ceph-mon[61345]: pgmap v1500: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:48:02.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:48:01 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:48:02.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:48:01 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:48:02.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:48:01 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:48:02.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:48:01 vm05.local ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:48:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:48:01 vm05.local ceph-mon[51870]: pgmap v1500: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:48:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:48:01 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:48:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:48:01 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:48:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:48:01 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:48:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:48:01 vm05.local ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:48:04.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:48:03 vm09.local ceph-mon[54524]: pgmap v1501: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:48:03 vm05.local ceph-mon[61345]: pgmap v1501: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:48:03 vm05.local ceph-mon[51870]: pgmap v1501: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:06.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:48:06 vm05.local ceph-mon[61345]: pgmap v1502: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:06.410 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:48:06 vm05.local ceph-mon[51870]: pgmap v1502: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:48:06 vm09.local ceph-mon[54524]: pgmap v1502: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:07.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:48:06 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:48:08.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:48:08 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:48:08.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:48:08 vm05.local ceph-mon[61345]: pgmap v1503: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:48:08.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:48:08 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:48:08.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:48:08 vm05.local ceph-mon[51870]: pgmap v1503: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:48:08.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:48:08 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:48:08.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:48:08 vm09.local ceph-mon[54524]: pgmap v1503: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:48:08.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:48:08 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:48:08] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:48:10.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:48:10 vm05.local ceph-mon[61345]: pgmap v1504: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:10.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:48:10 vm05.local ceph-mon[51870]: pgmap v1504: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:10.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:48:10 vm09.local ceph-mon[54524]: pgmap v1504: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:11.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:48:11 vm05.local ceph-mon[61345]: pgmap v1505: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:48:11.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:48:11 vm05.local ceph-mon[51870]: pgmap v1505: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:48:11.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:48:11 vm09.local ceph-mon[54524]: pgmap v1505: 228 pgs: 228 active+clean; 455 KiB data, 
1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:48:13.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:48:13 vm05.local ceph-mon[61345]: pgmap v1506: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:13.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:48:13 vm05.local ceph-mon[51870]: pgmap v1506: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:14.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:48:13 vm09.local ceph-mon[54524]: pgmap v1506: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:16.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:48:15 vm09.local ceph-mon[54524]: pgmap v1507: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:16.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:48:15 vm05.local ceph-mon[61345]: pgmap v1507: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:16.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:48:15 vm05.local ceph-mon[51870]: pgmap v1507: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:16.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:48:16 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:48:16.924 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:48:16 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:48:17.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:48:16 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:48:17.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:48:16 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:48:18.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:48:17 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:48:18.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:48:17 vm09.local ceph-mon[54524]: pgmap v1508: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:48:18.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:48:17 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:48:18.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:48:17 vm05.local ceph-mon[61345]: pgmap v1508: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:48:18.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:48:17 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:48:18.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 
09 20:48:17 vm05.local ceph-mon[51870]: pgmap v1508: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:48:18.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:48:18 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:48:18] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:48:20.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:48:19 vm09.local ceph-mon[54524]: pgmap v1509: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:20.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:48:19 vm05.local ceph-mon[61345]: pgmap v1509: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:20.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:48:19 vm05.local ceph-mon[51870]: pgmap v1509: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:22.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:48:21 vm09.local ceph-mon[54524]: pgmap v1510: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:48:22.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:48:21 vm05.local ceph-mon[61345]: pgmap v1510: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:48:22.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:48:21 vm05.local ceph-mon[51870]: pgmap v1510: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:48:24.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:48:23 vm09.local ceph-mon[54524]: pgmap v1511: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:24.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:48:23 vm05.local ceph-mon[61345]: pgmap v1511: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:24.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:48:23 vm05.local ceph-mon[51870]: pgmap v1511: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:26.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:48:25 vm05.local ceph-mon[61345]: pgmap v1512: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:26.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:48:25 vm05.local ceph-mon[51870]: pgmap v1512: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:26.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:48:25 vm09.local ceph-mon[54524]: pgmap v1512: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:27.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:48:26 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:48:28.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:48:27 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:48:28.160 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:48:27 vm05.local ceph-mon[61345]: pgmap v1513: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:48:28.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:48:27 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:48:28.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:48:27 vm05.local ceph-mon[51870]: pgmap v1513: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:48:28.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:48:27 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:48:28.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:48:27 vm09.local ceph-mon[54524]: pgmap v1513: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:48:28.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:48:28 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:48:28] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:48:30.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:48:29 vm05.local ceph-mon[61345]: pgmap v1514: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:30.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:48:29 vm05.local ceph-mon[51870]: pgmap v1514: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:30.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:48:29 vm09.local ceph-mon[54524]: pgmap v1514: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:31.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:48:30 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:48:31.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:48:30 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:48:31.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:48:30 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:48:32.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:48:31 vm05.local ceph-mon[61345]: pgmap v1515: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:48:32.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:48:31 vm05.local ceph-mon[51870]: pgmap v1515: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:48:32.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:48:31 vm09.local ceph-mon[54524]: pgmap v1515: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:48:34.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:48:33 vm05.local ceph-mon[61345]: pgmap v1516: 228 pgs: 
228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:34.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:48:33 vm05.local ceph-mon[51870]: pgmap v1516: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:34.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:48:33 vm09.local ceph-mon[54524]: pgmap v1516: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:36.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:48:35 vm05.local ceph-mon[61345]: pgmap v1517: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:36.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:48:35 vm05.local ceph-mon[51870]: pgmap v1517: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:36.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:48:35 vm09.local ceph-mon[54524]: pgmap v1517: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:37.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:48:36 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:48:38.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:48:37 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:48:38.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:48:37 vm05.local ceph-mon[61345]: pgmap v1518: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:48:38.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:48:37 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:48:38.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:48:37 vm05.local ceph-mon[51870]: pgmap v1518: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:48:38.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:48:37 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:48:38.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:48:37 vm09.local ceph-mon[54524]: pgmap v1518: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:48:38.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:48:38 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:48:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:48:40.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:48:39 vm05.local ceph-mon[61345]: pgmap v1519: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:40.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:48:39 vm05.local ceph-mon[51870]: pgmap v1519: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:40.273 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:48:39 vm09.local ceph-mon[54524]: pgmap v1519: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:42.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:48:41 vm05.local ceph-mon[61345]: pgmap v1520: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:48:42.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:48:41 vm05.local ceph-mon[51870]: pgmap v1520: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:48:42.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:48:41 vm09.local ceph-mon[54524]: pgmap v1520: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:48:44.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:48:44 vm09.local ceph-mon[54524]: pgmap v1521: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:44.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:48:44 vm05.local ceph-mon[61345]: pgmap v1521: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:44.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:48:44 vm05.local ceph-mon[51870]: pgmap v1521: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:46.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:48:46 vm05.local ceph-mon[61345]: pgmap v1522: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:46.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:48:46 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:48:46.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:48:46 vm05.local ceph-mon[51870]: pgmap v1522: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:46.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:48:46 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:48:46.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:48:46 vm09.local ceph-mon[54524]: pgmap v1522: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:46.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:48:46 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:48:47.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:48:46 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:48:48.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:48:48 vm05.local ceph-mon[61345]: pgmap v1523: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:48:48.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:48:48 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: 
dispatch 2026-03-09T20:48:48.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:48:48 vm05.local ceph-mon[51870]: pgmap v1523: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:48:48.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:48:48 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:48:48.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:48:48 vm09.local ceph-mon[54524]: pgmap v1523: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:48:48.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:48:48 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:48:48.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:48:48 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:48:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:48:50.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:48:50 vm05.local ceph-mon[61345]: pgmap v1524: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:50.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:48:50 vm05.local ceph-mon[51870]: pgmap v1524: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:50.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:48:50 vm09.local ceph-mon[54524]: pgmap v1524: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:52.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:48:52 vm05.local ceph-mon[61345]: pgmap v1525: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:48:52.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:48:52 vm05.local ceph-mon[51870]: pgmap v1525: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:48:52.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:48:52 vm09.local ceph-mon[54524]: pgmap v1525: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:48:54.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:48:54 vm05.local ceph-mon[61345]: pgmap v1526: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:54.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:48:54 vm05.local ceph-mon[51870]: pgmap v1526: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:54.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:48:54 vm09.local ceph-mon[54524]: pgmap v1526: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:56.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:48:56 vm05.local ceph-mon[61345]: pgmap v1527: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:56.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:48:56 vm05.local ceph-mon[51870]: pgmap v1527: 228 pgs: 228 active+clean; 455 
KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:56.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:48:56 vm09.local ceph-mon[54524]: pgmap v1527: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:48:57.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:48:56 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:48:58.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:48:58 vm05.local ceph-mon[61345]: pgmap v1528: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:48:58.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:48:58 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:48:58.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:48:58 vm05.local ceph-mon[51870]: pgmap v1528: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:48:58.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:48:58 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:48:58.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:48:58 vm09.local ceph-mon[54524]: pgmap v1528: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:48:58.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:48:58 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:48:58.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:48:58 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:48:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:49:00.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:00 vm05.local ceph-mon[61345]: pgmap v1529: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:49:00.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:00 vm05.local ceph-mon[51870]: pgmap v1529: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:49:00.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:00 vm09.local ceph-mon[54524]: pgmap v1529: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:49:01.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:01 vm05.local ceph-mon[61345]: pgmap v1530: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:49:01.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:01 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:49:01.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:01 vm05.local ceph-mon[51870]: pgmap v1530: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:49:01.410 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:01 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:49:01.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:01 vm09.local ceph-mon[54524]: pgmap v1530: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:49:01.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:01 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:49:02.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:02 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:49:02.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:02 vm05.local ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:49:02.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:02 vm05.local ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:49:02.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:02 vm05.local ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:49:02.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:02 vm05.local ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:49:02.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:02 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:49:02.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:02 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:49:02.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:02 vm05.local ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:49:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:02 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:49:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:02 vm05.local ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:49:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:02 vm05.local ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:49:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:02 vm05.local ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:49:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:02 vm05.local ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:49:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:02 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:49:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:02 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:49:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:02 vm05.local ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:49:02.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:02 vm09.local 
ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:49:02.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:02 vm09.local ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:49:02.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:02 vm09.local ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:49:02.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:02 vm09.local ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:49:02.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:02 vm09.local ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:49:02.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:02 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:49:02.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:02 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:49:02.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:02 vm09.local ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:49:03.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:03 vm09.local ceph-mon[54524]: pgmap v1531: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:49:03.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:03 vm05.local ceph-mon[61345]: pgmap v1531: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:49:03.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:03 vm05.local ceph-mon[51870]: pgmap v1531: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:49:06.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:05 vm09.local ceph-mon[54524]: pgmap v1532: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:49:06.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:05 vm05.local ceph-mon[61345]: pgmap v1532: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:49:06.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:05 vm05.local ceph-mon[51870]: pgmap v1532: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:49:07.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:49:06 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:49:08.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:07 vm09.local ceph-mon[54524]: pgmap v1533: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:49:08.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:07 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:49:08.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:07 vm05.local ceph-mon[61345]: pgmap v1533: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:49:08.160 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:07 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:49:08.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:07 vm05.local ceph-mon[51870]: pgmap v1533: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:49:08.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:07 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:49:08.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:49:08 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:49:08] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:49:10.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:09 vm09.local ceph-mon[54524]: pgmap v1534: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:49:10.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:09 vm05.local ceph-mon[61345]: pgmap v1534: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:49:10.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:09 vm05.local ceph-mon[51870]: pgmap v1534: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:49:12.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:11 vm09.local ceph-mon[54524]: pgmap v1535: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:49:12.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:11 vm05.local ceph-mon[61345]: pgmap v1535: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:49:12.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:11 vm05.local ceph-mon[51870]: pgmap v1535: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:49:14.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:13 vm05.local ceph-mon[61345]: pgmap v1536: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T20:49:14.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:13 vm05.local ceph-mon[51870]: pgmap v1536: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T20:49:14.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:13 vm09.local ceph-mon[54524]: pgmap v1536: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T20:49:16.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:15 vm05.local ceph-mon[61345]: pgmap v1537: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T20:49:16.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:15 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:49:16.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:15 vm05.local ceph-mon[51870]: pgmap v1537: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB 
used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T20:49:16.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:15 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:49:16.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:15 vm09.local ceph-mon[54524]: pgmap v1537: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T20:49:16.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:15 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:49:17.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:49:16 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:49:18.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:17 vm05.local ceph-mon[61345]: pgmap v1538: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:49:18.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:17 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:49:18.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:17 vm05.local ceph-mon[51870]: pgmap v1538: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:49:18.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:17 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:49:18.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:17 vm09.local ceph-mon[54524]: pgmap v1538: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:49:18.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:17 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:49:18.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:49:18 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:49:18] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:49:20.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:19 vm05.local ceph-mon[61345]: pgmap v1539: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T20:49:20.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:19 vm05.local ceph-mon[51870]: pgmap v1539: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T20:49:20.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:19 vm09.local ceph-mon[54524]: pgmap v1539: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T20:49:22.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:21 vm09.local ceph-mon[54524]: pgmap v1540: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:49:22.410 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:21 vm05.local ceph-mon[61345]: pgmap v1540: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:49:22.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:21 vm05.local ceph-mon[51870]: pgmap v1540: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:49:24.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:24 vm09.local ceph-mon[54524]: pgmap v1541: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:49:24.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:24 vm05.local ceph-mon[61345]: pgmap v1541: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:49:24.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:24 vm05.local ceph-mon[51870]: pgmap v1541: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:49:26.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:26 vm09.local ceph-mon[54524]: pgmap v1542: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:49:26.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:26 vm05.local ceph-mon[61345]: pgmap v1542: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:49:26.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:26 vm05.local ceph-mon[51870]: pgmap v1542: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:49:27.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:49:26 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:49:28.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:28 vm09.local ceph-mon[54524]: pgmap v1543: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:49:28.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:28 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:49:28.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:28 vm05.local ceph-mon[61345]: pgmap v1543: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:49:28.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:28 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:49:28.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:28 vm05.local ceph-mon[51870]: pgmap v1543: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:49:28.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:28 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:49:28.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:49:28 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:49:28] 
"GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:49:30.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:30 vm05.local ceph-mon[61345]: pgmap v1544: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:49:30.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:30 vm05.local ceph-mon[51870]: pgmap v1544: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:49:30.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:30 vm09.local ceph-mon[54524]: pgmap v1544: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:49:31.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:31 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:49:31.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:31 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:49:31.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:31 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:49:32.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:32 vm05.local ceph-mon[61345]: pgmap v1545: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:49:32.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:32 vm05.local ceph-mon[51870]: pgmap v1545: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:49:32.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:32 vm09.local ceph-mon[54524]: pgmap v1545: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:49:33.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:33 vm05.local ceph-mon[61345]: pgmap v1546: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:49:33.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:33 vm05.local ceph-mon[51870]: pgmap v1546: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:49:33.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:33 vm09.local ceph-mon[54524]: pgmap v1546: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:49:36.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:35 vm09.local ceph-mon[54524]: pgmap v1547: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:49:36.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:35 vm05.local ceph-mon[61345]: pgmap v1547: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:49:36.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:35 vm05.local ceph-mon[51870]: pgmap v1547: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:49:37.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:49:36 vm09.local 
ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:49:38.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:37 vm05.local ceph-mon[61345]: pgmap v1548: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:49:38.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:37 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:49:38.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:37 vm05.local ceph-mon[51870]: pgmap v1548: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:49:38.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:37 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:49:38.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:37 vm09.local ceph-mon[54524]: pgmap v1548: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:49:38.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:37 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:49:38.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:49:38 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:49:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:49:40.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:39 vm05.local ceph-mon[61345]: pgmap v1549: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:49:40.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:39 vm05.local ceph-mon[51870]: pgmap v1549: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:49:40.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:39 vm09.local ceph-mon[54524]: pgmap v1549: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:49:42.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:41 vm05.local ceph-mon[61345]: pgmap v1550: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:49:42.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:41 vm05.local ceph-mon[51870]: pgmap v1550: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:49:42.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:41 vm09.local ceph-mon[54524]: pgmap v1550: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:49:44.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:43 vm05.local ceph-mon[61345]: pgmap v1551: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:49:44.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:43 vm05.local ceph-mon[51870]: pgmap v1551: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T20:49:44.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:43 vm09.local ceph-mon[54524]: pgmap v1551: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:49:46.075 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:45 vm09.local ceph-mon[54524]: pgmap v1552: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:49:46.075 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:45 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:49:46.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:45 vm05.local ceph-mon[61345]: pgmap v1552: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:49:46.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:45 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:49:46.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:45 vm05.local ceph-mon[51870]: pgmap v1552: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:49:46.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:45 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:49:47.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:49:46 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:49:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:47 vm05.local ceph-mon[61345]: pgmap v1553: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:49:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:47 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:49:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:47 vm05.local ceph-mon[51870]: pgmap v1553: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:49:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:47 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:49:48.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:47 vm09.local ceph-mon[54524]: pgmap v1553: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:49:48.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:47 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:49:48.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:49:48 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:49:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:49:50.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:49 vm05.local 
ceph-mon[61345]: pgmap v1554: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:49:50.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:49 vm05.local ceph-mon[51870]: pgmap v1554: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:49:50.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:49 vm09.local ceph-mon[54524]: pgmap v1554: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:49:52.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:51 vm05.local ceph-mon[61345]: pgmap v1555: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:49:52.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:51 vm05.local ceph-mon[51870]: pgmap v1555: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:49:52.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:51 vm09.local ceph-mon[54524]: pgmap v1555: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:49:54.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:54 vm05.local ceph-mon[61345]: pgmap v1556: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:49:54.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:53 vm05.local ceph-mon[51870]: pgmap v1556: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:49:54.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:54 vm09.local ceph-mon[54524]: pgmap v1556: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:49:56.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:55 vm09.local ceph-mon[54524]: pgmap v1557: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:49:56.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:55 vm05.local ceph-mon[61345]: pgmap v1557: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:49:56.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:55 vm05.local ceph-mon[51870]: pgmap v1557: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:49:57.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:49:56 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:49:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:57 vm05.local ceph-mon[61345]: pgmap v1558: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:49:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:57 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:49:57.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:57 vm05.local ceph-mon[51870]: pgmap v1558: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:49:57.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:57 vm05.local ceph-mon[51870]: 
from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:49:57.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:57 vm09.local ceph-mon[54524]: pgmap v1558: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:49:57.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:57 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:49:58.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:49:58 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:49:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:50:00.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:49:59 vm09.local ceph-mon[54524]: pgmap v1559: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:50:00.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:49:59 vm05.local ceph-mon[61345]: pgmap v1559: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:50:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:49:59 vm05.local ceph-mon[51870]: pgmap v1559: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:50:01.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:50:00 vm09.local ceph-mon[54524]: overall HEALTH_WARN 3 pool(s) do not have an application enabled 2026-03-09T20:50:01.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:50:00 vm05.local ceph-mon[61345]: overall HEALTH_WARN 3 pool(s) do not have an application enabled 2026-03-09T20:50:01.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:50:00 vm05.local ceph-mon[51870]: overall HEALTH_WARN 3 pool(s) do not have an application enabled 2026-03-09T20:50:02.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:50:01 vm09.local ceph-mon[54524]: pgmap v1560: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:50:02.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:50:01 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:50:02.161 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:50:01 vm05.local ceph-mon[61345]: pgmap v1560: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:50:02.161 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:50:01 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:50:02.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:50:01 vm05.local ceph-mon[51870]: pgmap v1560: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:50:02.161 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:50:01 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:50:02.911 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:50:02 vm05.local ceph-mon[51870]: from='mgr.24602 
v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:50:02.911 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:50:02 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:50:03.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:50:02 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:50:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:50:04 vm05.local ceph-mon[51870]: pgmap v1561: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:50:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:50:04 vm05.local ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:50:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:50:04 vm05.local ceph-mon[61345]: pgmap v1561: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:50:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:50:04 vm05.local ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:50:04.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:50:04 vm09.local ceph-mon[54524]: pgmap v1561: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:50:04.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:50:04 vm09.local ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:50:05.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:50:05 vm09.local ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:50:05.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:50:05 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:50:05.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:50:05 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:50:05.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:50:05 vm09.local ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:50:05.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:50:05 vm05.local ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:50:05.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:50:05 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:50:05.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:50:05 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:50:05.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:50:05 vm05.local ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:50:05.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:50:05 vm05.local ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:50:05.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:50:05 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:50:05.410 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:50:05 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:50:05.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:50:05 vm05.local ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:50:06.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:50:06 vm05.local ceph-mon[61345]: pgmap v1562: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:50:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:50:06 vm05.local ceph-mon[51870]: pgmap v1562: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:50:06.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:50:06 vm09.local ceph-mon[54524]: pgmap v1562: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:50:07.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:50:06 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:50:08.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:50:08 vm09.local ceph-mon[54524]: pgmap v1563: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:50:08.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:50:08 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:50:08.550 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:50:08 vm05.local ceph-mon[61345]: pgmap v1563: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:50:08.550 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:50:08 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:50:08.550 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:50:08 vm05.local ceph-mon[51870]: pgmap v1563: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:50:08.550 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:50:08 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:50:08.911 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:50:08 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:50:08] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:50:09.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:50:09 vm05.local ceph-mon[61345]: pgmap v1564: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:50:09.411 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:50:09 vm05.local ceph-mon[51870]: pgmap v1564: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:50:09.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:50:09 vm09.local ceph-mon[54524]: pgmap v1564: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T20:50:12.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:50:11 vm09.local ceph-mon[54524]: pgmap v1565: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:50:12.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:50:11 vm05.local ceph-mon[61345]: pgmap v1565: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:50:12.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:50:11 vm05.local ceph-mon[51870]: pgmap v1565: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:50:14.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:50:13 vm09.local ceph-mon[54524]: pgmap v1566: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:50:14.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:50:13 vm05.local ceph-mon[61345]: pgmap v1566: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:50:14.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:50:13 vm05.local ceph-mon[51870]: pgmap v1566: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:50:16.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:50:15 vm09.local ceph-mon[54524]: pgmap v1567: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:50:16.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:50:15 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:50:16.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:50:15 vm05.local ceph-mon[61345]: pgmap v1567: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:50:16.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:50:15 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:50:16.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:50:15 vm05.local ceph-mon[51870]: pgmap v1567: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:50:16.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:50:15 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:50:17.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:50:16 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:50:18.633 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:50:18 vm05.local ceph-mon[61345]: pgmap v1568: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:50:18.633 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:50:18 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:50:18.633 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:50:18 vm05.local ceph-mon[51870]: pgmap v1568: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 
GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:50:18.633 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:50:18 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:50:18.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:50:18 vm09.local ceph-mon[54524]: pgmap v1568: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:50:18.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:50:18 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:50:18.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:50:18 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:50:18] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:50:19.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:50:19 vm05.local ceph-mon[61345]: pgmap v1569: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:50:19.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:50:19 vm05.local ceph-mon[51870]: pgmap v1569: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:50:19.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:50:19 vm09.local ceph-mon[54524]: pgmap v1569: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:50:22.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:50:21 vm09.local ceph-mon[54524]: pgmap v1570: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:50:22.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:50:21 vm05.local ceph-mon[61345]: pgmap v1570: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:50:22.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:50:21 vm05.local ceph-mon[51870]: pgmap v1570: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:50:24.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:50:23 vm05.local ceph-mon[61345]: pgmap v1571: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:50:24.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:50:23 vm05.local ceph-mon[51870]: pgmap v1571: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:50:24.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:50:23 vm09.local ceph-mon[54524]: pgmap v1571: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:50:26.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:50:25 vm05.local ceph-mon[61345]: pgmap v1572: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:50:26.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:50:25 vm05.local ceph-mon[51870]: pgmap v1572: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:50:26.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:50:25 vm09.local ceph-mon[54524]: pgmap v1572: 228 pgs: 
228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:50:27.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:50:26 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:50:28.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:50:27 vm05.local ceph-mon[61345]: pgmap v1573: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:50:28.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:50:27 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:50:28.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:50:27 vm05.local ceph-mon[51870]: pgmap v1573: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:50:28.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:50:27 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:50:28.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:50:27 vm09.local ceph-mon[54524]: pgmap v1573: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:50:28.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:50:27 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:50:28.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:50:28 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:50:28] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:50:30.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:50:29 vm05.local ceph-mon[61345]: pgmap v1574: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:50:30.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:50:29 vm05.local ceph-mon[51870]: pgmap v1574: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:50:30.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:50:29 vm09.local ceph-mon[54524]: pgmap v1574: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:50:31.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:50:30 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:50:31.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:50:30 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:50:31.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:50:30 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:50:32.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:50:31 vm05.local ceph-mon[51870]: pgmap v1575: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-09T20:50:32.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:50:31 vm05.local ceph-mon[61345]: pgmap v1575: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:50:32.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:50:31 vm09.local ceph-mon[54524]: pgmap v1575: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:50:34.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:50:33 vm05.local ceph-mon[61345]: pgmap v1576: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:50:34.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:50:33 vm05.local ceph-mon[51870]: pgmap v1576: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:50:34.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:50:33 vm09.local ceph-mon[54524]: pgmap v1576: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:50:36.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:50:35 vm05.local ceph-mon[61345]: pgmap v1577: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:50:36.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:50:35 vm05.local ceph-mon[51870]: pgmap v1577: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:50:36.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:50:35 vm09.local ceph-mon[54524]: pgmap v1577: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:50:37.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:50:36 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:50:38.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:50:37 vm09.local ceph-mon[54524]: pgmap v1578: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T20:50:38.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:50:37 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:50:38.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:50:37 vm05.local ceph-mon[61345]: pgmap v1578: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T20:50:38.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:50:37 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:50:38.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:50:37 vm05.local ceph-mon[51870]: pgmap v1578: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T20:50:38.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:50:37 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:50:38.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:50:38 vm05.local 
ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:50:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:50:40.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:50:39 vm09.local ceph-mon[54524]: pgmap v1579: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T20:50:40.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:50:39 vm05.local ceph-mon[61345]: pgmap v1579: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T20:50:40.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:50:39 vm05.local ceph-mon[51870]: pgmap v1579: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T20:50:42.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:50:41 vm09.local ceph-mon[54524]: pgmap v1580: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T20:50:42.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:50:41 vm05.local ceph-mon[61345]: pgmap v1580: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T20:50:42.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:50:41 vm05.local ceph-mon[51870]: pgmap v1580: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T20:50:44.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:50:43 vm09.local ceph-mon[54524]: pgmap v1581: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T20:50:44.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:50:43 vm05.local ceph-mon[61345]: pgmap v1581: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T20:50:44.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:50:43 vm05.local ceph-mon[51870]: pgmap v1581: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T20:50:46.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:50:45 vm09.local ceph-mon[54524]: pgmap v1582: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T20:50:46.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:50:45 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:50:46.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:50:45 vm05.local ceph-mon[61345]: pgmap v1582: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T20:50:46.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:50:45 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:50:46.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:50:45 vm05.local ceph-mon[51870]: pgmap v1582: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T20:50:46.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:50:45 vm05.local ceph-mon[51870]: from='mgr.24602 
v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:50:47.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:50:46 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:50:48.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:50:47 vm09.local ceph-mon[54524]: pgmap v1583: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T20:50:48.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:50:47 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:50:48.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:50:47 vm05.local ceph-mon[61345]: pgmap v1583: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T20:50:48.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:50:47 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:50:48.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:50:47 vm05.local ceph-mon[51870]: pgmap v1583: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-09T20:50:48.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:50:47 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:50:48.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:50:48 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:50:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:50:50.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:50:49 vm09.local ceph-mon[54524]: pgmap v1584: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:50:50.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:50:49 vm05.local ceph-mon[61345]: pgmap v1584: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:50:50.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:50:49 vm05.local ceph-mon[51870]: pgmap v1584: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:50:52.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:50:52 vm09.local ceph-mon[54524]: pgmap v1585: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:50:52.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:50:52 vm05.local ceph-mon[61345]: pgmap v1585: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:50:52.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:50:52 vm05.local ceph-mon[51870]: pgmap v1585: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:50:54.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:50:54 vm09.local ceph-mon[54524]: pgmap v1586: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 
op/s 2026-03-09T20:50:54.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:50:54 vm05.local ceph-mon[61345]: pgmap v1586: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:50:54.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:50:54 vm05.local ceph-mon[51870]: pgmap v1586: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:50:56.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:50:56 vm09.local ceph-mon[54524]: pgmap v1587: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:50:56.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:50:56 vm05.local ceph-mon[61345]: pgmap v1587: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:50:56.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:50:56 vm05.local ceph-mon[51870]: pgmap v1587: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:50:57.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:50:56 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:50:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:50:57 vm05.local ceph-mon[61345]: pgmap v1588: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:50:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:50:57 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:50:57.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:50:57 vm05.local ceph-mon[51870]: pgmap v1588: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:50:57.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:50:57 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:50:57.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:50:57 vm09.local ceph-mon[54524]: pgmap v1588: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:50:57.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:50:57 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:50:58.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:50:58 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:50:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:50:59.273 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:50:58 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=cleanup t=2026-03-09T20:50:58.786455272Z level=info msg="Completed cleanup jobs" duration=2.752553ms 2026-03-09T20:50:59.273 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:50:58 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=plugins.update.checker t=2026-03-09T20:50:58.951114694Z level=info msg="Update check succeeded" duration=56.441365ms 2026-03-09T20:51:00.023 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:50:59 vm09.local ceph-mon[54524]: pgmap v1589: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:51:00.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:50:59 vm05.local ceph-mon[61345]: pgmap v1589: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:51:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:50:59 vm05.local ceph-mon[51870]: pgmap v1589: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:51:02.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:01 vm09.local ceph-mon[54524]: pgmap v1590: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:51:02.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:01 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:51:02.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:01 vm05.local ceph-mon[61345]: pgmap v1590: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:51:02.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:01 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:51:02.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:01 vm05.local ceph-mon[51870]: pgmap v1590: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:51:02.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:01 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:51:04.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:03 vm09.local ceph-mon[54524]: pgmap v1591: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:51:04.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:03 vm05.local ceph-mon[61345]: pgmap v1591: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:51:04.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:03 vm05.local ceph-mon[51870]: pgmap v1591: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:51:04.990 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:04 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:51:04.990 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:04 vm09.local ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:51:04.990 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:04 vm09.local ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:51:04.990 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:04 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:51:04.990 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:04 vm09.local ceph-mon[54524]: 
from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:51:04.990 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:04 vm09.local ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:51:04.990 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:04 vm09.local ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:51:05.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:04 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:51:05.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:04 vm05.local ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:51:05.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:04 vm05.local ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:51:05.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:04 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:51:05.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:04 vm05.local ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:51:05.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:04 vm05.local ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:51:05.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:04 vm05.local ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:51:05.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:04 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:51:05.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:04 vm05.local ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:51:05.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:04 vm05.local ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:51:05.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:04 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:51:05.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:04 vm05.local ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:51:05.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:04 vm05.local ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:51:05.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:04 vm05.local ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:51:06.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:05 vm09.local ceph-mon[54524]: pgmap v1592: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:51:06.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:05 vm05.local ceph-mon[51870]: pgmap v1592: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:51:06.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:05 vm05.local ceph-mon[61345]: pgmap v1592: 228 pgs: 228 
active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:51:07.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:06 vm09.local ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:51:07.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:06 vm09.local ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:51:07.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:06 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:51:07.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:06 vm09.local ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:51:07.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:06 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:51:07.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:06 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:51:07.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:06 vm09.local ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:51:07.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:51:06 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:51:07.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:06 vm05.local ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:51:07.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:06 vm05.local ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:51:07.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:06 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:51:07.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:06 vm05.local ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:51:07.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:06 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:51:07.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:06 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:51:07.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:06 vm05.local ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:51:07.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:06 vm05.local ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:51:07.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:06 vm05.local ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:51:07.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:06 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config rm", "who": 
"osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:51:07.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:06 vm05.local ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:51:07.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:06 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:51:07.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:06 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:51:07.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:06 vm05.local ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:51:08.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:07 vm05.local ceph-mon[61345]: pgmap v1593: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:51:08.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:07 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:51:08.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:07 vm05.local ceph-mon[51870]: pgmap v1593: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:51:08.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:07 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:51:08.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:07 vm09.local ceph-mon[54524]: pgmap v1593: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:51:08.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:07 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:51:08.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:51:08 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:51:08] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:51:10.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:09 vm09.local ceph-mon[54524]: pgmap v1594: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:51:10.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:09 vm05.local ceph-mon[61345]: pgmap v1594: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:51:10.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:09 vm05.local ceph-mon[51870]: pgmap v1594: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:51:12.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:11 vm09.local ceph-mon[54524]: pgmap v1595: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:51:12.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:11 vm05.local ceph-mon[61345]: 
pgmap v1595: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:51:12.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:11 vm05.local ceph-mon[51870]: pgmap v1595: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:51:14.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:13 vm09.local ceph-mon[54524]: pgmap v1596: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:51:14.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:13 vm05.local ceph-mon[61345]: pgmap v1596: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:51:14.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:13 vm05.local ceph-mon[51870]: pgmap v1596: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:51:16.247 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:15 vm09.local ceph-mon[54524]: pgmap v1597: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:51:16.247 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:15 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:51:16.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:15 vm05.local ceph-mon[61345]: pgmap v1597: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:51:16.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:15 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:51:16.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:15 vm05.local ceph-mon[51870]: pgmap v1597: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:51:16.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:15 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:51:17.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:51:16 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:51:18.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:17 vm09.local ceph-mon[54524]: pgmap v1598: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:51:18.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:17 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:51:18.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:17 vm05.local ceph-mon[61345]: pgmap v1598: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:51:18.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:17 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:51:18.410 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:17 vm05.local ceph-mon[51870]: pgmap v1598: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:51:18.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:17 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:51:18.829 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:51:18 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:51:18] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:51:20.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:19 vm09.local ceph-mon[54524]: pgmap v1599: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:51:20.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:19 vm05.local ceph-mon[51870]: pgmap v1599: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:51:20.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:19 vm05.local ceph-mon[61345]: pgmap v1599: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:51:22.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:21 vm09.local ceph-mon[54524]: pgmap v1600: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:51:22.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:21 vm05.local ceph-mon[61345]: pgmap v1600: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:51:22.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:21 vm05.local ceph-mon[51870]: pgmap v1600: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:51:24.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:23 vm09.local ceph-mon[54524]: pgmap v1601: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:51:24.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:23 vm05.local ceph-mon[61345]: pgmap v1601: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:51:24.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:23 vm05.local ceph-mon[51870]: pgmap v1601: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:51:26.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:25 vm09.local ceph-mon[54524]: pgmap v1602: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:51:26.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:25 vm05.local ceph-mon[61345]: pgmap v1602: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:51:26.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:25 vm05.local ceph-mon[51870]: pgmap v1602: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:51:26.876 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:51:26 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 
2026-03-09T20:51:28.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:28 vm05.local ceph-mon[61345]: pgmap v1603: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:51:28.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:28 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:51:28.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:28 vm05.local ceph-mon[51870]: pgmap v1603: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:51:28.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:28 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:51:28.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:28 vm09.local ceph-mon[54524]: pgmap v1603: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:51:28.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:28 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:51:28.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:51:28 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:51:28] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:51:29.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:29 vm09.local ceph-mon[54524]: pgmap v1604: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:51:29.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:29 vm05.local ceph-mon[61345]: pgmap v1604: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:51:29.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:29 vm05.local ceph-mon[51870]: pgmap v1604: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:51:31.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:31 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:51:31.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:30 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:51:31.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:31 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:51:32.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:31 vm09.local ceph-mon[54524]: pgmap v1605: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:51:32.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:31 vm05.local ceph-mon[61345]: pgmap v1605: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:51:32.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 
20:51:31 vm05.local ceph-mon[51870]: pgmap v1605: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:51:34.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:34 vm05.local ceph-mon[61345]: pgmap v1606: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:51:34.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:34 vm05.local ceph-mon[51870]: pgmap v1606: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:51:34.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:34 vm09.local ceph-mon[54524]: pgmap v1606: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:51:35.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:35 vm05.local ceph-mon[61345]: pgmap v1607: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:51:35.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:35 vm05.local ceph-mon[51870]: pgmap v1607: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:51:35.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:35 vm09.local ceph-mon[54524]: pgmap v1607: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:51:37.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:51:36 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:51:38.112 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:37 vm05.local ceph-mon[61345]: pgmap v1608: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:51:38.112 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:37 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:51:38.112 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:37 vm05.local ceph-mon[51870]: pgmap v1608: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:51:38.112 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:37 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:51:38.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:37 vm09.local ceph-mon[54524]: pgmap v1608: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:51:38.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:37 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:51:38.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:51:38 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:51:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:51:40.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:40 vm05.local ceph-mon[61345]: pgmap v1609: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 
0 op/s 2026-03-09T20:51:40.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:40 vm05.local ceph-mon[51870]: pgmap v1609: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:51:40.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:40 vm09.local ceph-mon[54524]: pgmap v1609: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:51:41.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:41 vm05.local ceph-mon[61345]: pgmap v1610: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:51:41.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:41 vm05.local ceph-mon[51870]: pgmap v1610: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:51:41.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:41 vm09.local ceph-mon[54524]: pgmap v1610: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:51:44.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:43 vm05.local ceph-mon[61345]: pgmap v1611: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:51:44.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:43 vm05.local ceph-mon[51870]: pgmap v1611: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:51:44.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:43 vm09.local ceph-mon[54524]: pgmap v1611: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:51:46.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:46 vm05.local ceph-mon[61345]: pgmap v1612: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:51:46.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:46 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:51:46.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:46 vm05.local ceph-mon[51870]: pgmap v1612: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:51:46.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:46 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:51:46.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:46 vm09.local ceph-mon[54524]: pgmap v1612: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:51:46.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:46 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:51:47.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:51:46 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:51:48.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:48 vm09.local ceph-mon[54524]: pgmap v1613: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 
op/s
2026-03-09T20:51:48.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:48 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T20:51:48.633 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:48 vm05.local ceph-mon[61345]: pgmap v1613: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T20:51:48.633 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:48 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T20:51:48.633 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:48 vm05.local ceph-mon[51870]: pgmap v1613: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T20:51:48.633 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:48 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T20:51:48.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:51:48 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:51:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-09T20:51:49.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:49 vm09.local ceph-mon[54524]: pgmap v1614: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T20:51:49.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:49 vm05.local ceph-mon[61345]: pgmap v1614: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T20:51:49.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:49 vm05.local ceph-mon[51870]: pgmap v1614: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T20:51:52.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:51 vm09.local ceph-mon[54524]: pgmap v1615: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T20:51:52.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:51 vm05.local ceph-mon[61345]: pgmap v1615: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T20:51:52.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:51 vm05.local ceph-mon[51870]: pgmap v1615: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T20:51:53.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:52 vm09.local ceph-mon[54524]: osdmap e739: 8 total, 8 up, 8 in
2026-03-09T20:51:53.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:52 vm05.local ceph-mon[61345]: osdmap e739: 8 total, 8 up, 8 in
2026-03-09T20:51:53.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:52 vm05.local ceph-mon[51870]: osdmap e739: 8 total, 8 up, 8 in
2026-03-09T20:51:54.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:53 vm09.local ceph-mon[54524]: pgmap v1617: 196 pgs: 196 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-09T20:51:54.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:53 vm09.local ceph-mon[54524]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T20:51:54.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:53 vm09.local ceph-mon[54524]: osdmap e740: 8 total, 8 up, 8 in
2026-03-09T20:51:54.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:53 vm05.local ceph-mon[61345]: pgmap v1617: 196 pgs: 196 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-09T20:51:54.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:53 vm05.local ceph-mon[61345]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T20:51:54.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:53 vm05.local ceph-mon[61345]: osdmap e740: 8 total, 8 up, 8 in
2026-03-09T20:51:54.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:53 vm05.local ceph-mon[51870]: pgmap v1617: 196 pgs: 196 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-09T20:51:54.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:53 vm05.local ceph-mon[51870]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T20:51:54.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:53 vm05.local ceph-mon[51870]: osdmap e740: 8 total, 8 up, 8 in
2026-03-09T20:51:55.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:54 vm09.local ceph-mon[54524]: osdmap e741: 8 total, 8 up, 8 in
2026-03-09T20:51:55.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:54 vm05.local ceph-mon[61345]: osdmap e741: 8 total, 8 up, 8 in
2026-03-09T20:51:55.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:54 vm05.local ceph-mon[51870]: osdmap e741: 8 total, 8 up, 8 in
2026-03-09T20:51:56.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:55 vm09.local ceph-mon[54524]: pgmap v1620: 228 pgs: 64 unknown, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T20:51:56.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:55 vm09.local ceph-mon[54524]: osdmap e742: 8 total, 8 up, 8 in
2026-03-09T20:51:56.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:55 vm05.local ceph-mon[61345]: pgmap v1620: 228 pgs: 64 unknown, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T20:51:56.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:55 vm05.local ceph-mon[61345]: osdmap e742: 8 total, 8 up, 8 in
2026-03-09T20:51:56.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:55 vm05.local ceph-mon[51870]: pgmap v1620: 228 pgs: 64 unknown, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T20:51:56.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:55 vm05.local ceph-mon[51870]: osdmap e742: 8 total, 8 up, 8 in
2026-03-09T20:51:57.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:56 vm09.local ceph-mon[54524]: osdmap e743: 8 total, 8 up, 8 in
2026-03-09T20:51:57.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:51:56 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available
2026-03-09T20:51:57.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:56 vm05.local ceph-mon[61345]: osdmap e743: 8 total, 8 up, 8 in
2026-03-09T20:51:57.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:56 vm05.local ceph-mon[51870]: osdmap e743: 8 total, 8 up, 8 in
2026-03-09T20:51:58.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:57 vm05.local ceph-mon[61345]: pgmap v1623: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T20:51:58.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:57 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T20:51:58.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:57 vm05.local ceph-mon[61345]: osdmap e744: 8 total, 8 up, 8 in
2026-03-09T20:51:58.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:57 vm05.local ceph-mon[51870]: pgmap v1623: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T20:51:58.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:57 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T20:51:58.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:57 vm05.local ceph-mon[51870]: osdmap e744: 8 total, 8 up, 8 in
2026-03-09T20:51:58.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:57 vm09.local ceph-mon[54524]: pgmap v1623: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T20:51:58.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:57 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T20:51:58.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:57 vm09.local ceph-mon[54524]: osdmap e744: 8 total, 8 up, 8 in
2026-03-09T20:51:58.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:51:58 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:51:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-09T20:51:58.930 INFO:tasks.workunit.client.0.vm05.stdout: watch_notify: Running main() from gmock_main.cc
2026-03-09T20:51:58.930 INFO:tasks.workunit.client.0.vm05.stdout: watch_notify: [==========] Running 2 tests from 1 test suite.
2026-03-09T20:51:58.930 INFO:tasks.workunit.client.0.vm05.stdout: watch_notify: [----------] Global test environment set-up.
2026-03-09T20:51:58.930 INFO:tasks.workunit.client.0.vm05.stdout: watch_notify: [----------] 2 tests from NeoRadosWatchNotify
2026-03-09T20:51:58.930 INFO:tasks.workunit.client.0.vm05.stdout: watch_notify: [ RUN ] NeoRadosWatchNotify.WatchNotify
2026-03-09T20:51:58.930 INFO:tasks.workunit.client.0.vm05.stdout: watch_notify: handle_notify cookie 94590701908304 notify_id 3169685864451 notifier_gid 24896
2026-03-09T20:51:58.930 INFO:tasks.workunit.client.0.vm05.stdout: watch_notify: [ OK ] NeoRadosWatchNotify.WatchNotify (1800932 ms)
2026-03-09T20:51:58.930 INFO:tasks.workunit.client.0.vm05.stdout: watch_notify: [ RUN ] NeoRadosWatchNotify.WatchNotifyTimeout
2026-03-09T20:51:58.930 INFO:tasks.workunit.client.0.vm05.stdout: watch_notify: Trying...
2026-03-09T20:51:58.930 INFO:tasks.workunit.client.0.vm05.stdout: watch_notify: handle_notify cookie 94590702997600 notify_id 3182570766337 notifier_gid 50350
2026-03-09T20:51:58.930 INFO:tasks.workunit.client.0.vm05.stdout: watch_notify: Waiting for 3.000000000s
2026-03-09T20:51:58.930 INFO:tasks.workunit.client.0.vm05.stdout: watch_notify: Timed out.
2026-03-09T20:51:58.930 INFO:tasks.workunit.client.0.vm05.stdout: watch_notify: Flushing... 2026-03-09T20:51:58.930 INFO:tasks.workunit.client.0.vm05.stdout: watch_notify: Flushed... 2026-03-09T20:51:58.930 INFO:tasks.workunit.client.0.vm05.stdout: watch_notify: [ OK ] NeoRadosWatchNotify.WatchNotifyTimeout (7203 ms) 2026-03-09T20:51:58.930 INFO:tasks.workunit.client.0.vm05.stdout: watch_notify: [----------] 2 tests from NeoRadosWatchNotify (1808135 ms total) 2026-03-09T20:51:58.930 INFO:tasks.workunit.client.0.vm05.stdout: watch_notify: 2026-03-09T20:51:58.930 INFO:tasks.workunit.client.0.vm05.stdout: watch_notify: [----------] Global test environment tear-down 2026-03-09T20:51:58.930 INFO:tasks.workunit.client.0.vm05.stdout: watch_notify: [==========] 2 tests from 1 test suite ran. (1808135 ms total) 2026-03-09T20:51:58.930 INFO:tasks.workunit.client.0.vm05.stdout: watch_notify: [ PASSED ] 2 tests. 2026-03-09T20:51:58.931 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-09T20:51:58.931 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94589 2026-03-09T20:51:58.931 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 94589 2026-03-09T20:51:58.931 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-09T20:51:58.931 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=95025 2026-03-09T20:51:58.931 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 95025 2026-03-09T20:51:58.931 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-09T20:51:58.931 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=95337 2026-03-09T20:51:58.931 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 95337 2026-03-09T20:51:58.931 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-09T20:51:58.931 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=95120 2026-03-09T20:51:58.931 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 95120 2026-03-09T20:51:58.931 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-09T20:51:58.931 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=95466 2026-03-09T20:51:58.931 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 95466 2026-03-09T20:51:58.931 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-09T20:51:58.931 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94964 2026-03-09T20:51:58.931 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 94964 2026-03-09T20:51:58.931 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-09T20:51:58.931 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94471 2026-03-09T20:51:58.931 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 94471 2026-03-09T20:51:58.931 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-09T20:51:58.931 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=95549 2026-03-09T20:51:58.931 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 95549 2026-03-09T20:51:58.931 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-09T20:51:58.931 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94852 2026-03-09T20:51:58.932 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 94852 2026-03-09T20:51:58.932 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-09T20:51:58.932 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94218 2026-03-09T20:51:58.932 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 94218 2026-03-09T20:51:58.932 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-09T20:51:58.932 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94275 2026-03-09T20:51:58.932 
INFO:tasks.workunit.client.0.vm05.stderr:+ wait 94275 2026-03-09T20:51:58.932 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-09T20:51:58.932 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94779 2026-03-09T20:51:58.932 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 94779 2026-03-09T20:51:58.932 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-09T20:51:58.932 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94319 2026-03-09T20:51:58.932 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 94319 2026-03-09T20:51:58.932 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-09T20:51:58.932 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94889 2026-03-09T20:51:58.932 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 94889 2026-03-09T20:51:58.932 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-09T20:51:58.932 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=94940 2026-03-09T20:51:58.932 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 94940 2026-03-09T20:51:58.932 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-09T20:51:58.932 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=95284 2026-03-09T20:51:58.932 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 95284 2026-03-09T20:51:58.932 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-09T20:51:58.932 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=95648 2026-03-09T20:51:58.932 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 95648 2026-03-09T20:51:59.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:59 vm09.local ceph-mon[54524]: osdmap e745: 8 total, 8 up, 8 in 2026-03-09T20:51:59.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:59 vm05.local ceph-mon[61345]: osdmap e745: 8 total, 8 up, 8 in 2026-03-09T20:51:59.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:58 vm05.local ceph-mon[51870]: osdmap e745: 8 total, 8 up, 8 in 2026-03-09T20:52:00.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:59 vm09.local ceph-mon[54524]: pgmap v1626: 228 pgs: 32 unknown, 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:52:00.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:59 vm09.local ceph-mon[54524]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:52:00.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:59 vm09.local ceph-mon[54524]: osdmap e746: 8 total, 8 up, 8 in 2026-03-09T20:52:00.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:51:59 vm09.local ceph-mon[54524]: osdmap e747: 8 total, 8 up, 8 in 2026-03-09T20:52:00.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:59 vm05.local ceph-mon[61345]: pgmap v1626: 228 pgs: 32 unknown, 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:52:00.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:59 vm05.local ceph-mon[61345]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:52:00.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:59 vm05.local ceph-mon[61345]: osdmap e746: 8 total, 8 up, 8 in 2026-03-09T20:52:00.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:51:59 vm05.local ceph-mon[61345]: osdmap e747: 8 total, 8 up, 8 in 2026-03-09T20:52:00.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:59 vm05.local ceph-mon[51870]: pgmap v1626: 228 pgs: 32 unknown, 196 active+clean; 455 KiB data, 1.1 GiB used, 159 
GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:52:00.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:59 vm05.local ceph-mon[51870]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:52:00.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:59 vm05.local ceph-mon[51870]: osdmap e746: 8 total, 8 up, 8 in 2026-03-09T20:52:00.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:51:59 vm05.local ceph-mon[51870]: osdmap e747: 8 total, 8 up, 8 in 2026-03-09T20:52:01.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:01 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:52:01.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:01 vm05.local ceph-mon[61345]: osdmap e748: 8 total, 8 up, 8 in 2026-03-09T20:52:01.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:01 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:52:01.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:01 vm05.local ceph-mon[51870]: osdmap e748: 8 total, 8 up, 8 in 2026-03-09T20:52:01.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:01 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:52:01.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:01 vm09.local ceph-mon[54524]: osdmap e748: 8 total, 8 up, 8 in 2026-03-09T20:52:02.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:02 vm05.local ceph-mon[61345]: pgmap v1629: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T20:52:02.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:02 vm05.local ceph-mon[61345]: osdmap e749: 8 total, 8 up, 8 in 2026-03-09T20:52:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:02 vm05.local ceph-mon[51870]: pgmap v1629: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T20:52:02.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:02 vm05.local ceph-mon[51870]: osdmap e749: 8 total, 8 up, 8 in 2026-03-09T20:52:02.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:02 vm09.local ceph-mon[54524]: pgmap v1629: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T20:52:02.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:02 vm09.local ceph-mon[54524]: osdmap e749: 8 total, 8 up, 8 in 2026-03-09T20:52:04.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:03 vm09.local ceph-mon[54524]: pgmap v1632: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T20:52:04.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:03 vm09.local ceph-mon[54524]: osdmap e750: 8 total, 8 up, 8 in 2026-03-09T20:52:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:03 vm05.local ceph-mon[61345]: pgmap v1632: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T20:52:04.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:03 vm05.local ceph-mon[61345]: osdmap e750: 8 total, 8 up, 8 in 2026-03-09T20:52:04.410 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:03 vm05.local ceph-mon[51870]: pgmap v1632: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-09T20:52:04.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:03 vm05.local ceph-mon[51870]: osdmap e750: 8 total, 8 up, 8 in 2026-03-09T20:52:05.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:04 vm09.local ceph-mon[54524]: osdmap e751: 8 total, 8 up, 8 in 2026-03-09T20:52:05.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:04 vm05.local ceph-mon[61345]: osdmap e751: 8 total, 8 up, 8 in 2026-03-09T20:52:05.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:04 vm05.local ceph-mon[51870]: osdmap e751: 8 total, 8 up, 8 in 2026-03-09T20:52:06.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:06 vm09.local ceph-mon[54524]: pgmap v1635: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:52:06.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:06 vm09.local ceph-mon[54524]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:52:06.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:06 vm09.local ceph-mon[54524]: osdmap e752: 8 total, 8 up, 8 in 2026-03-09T20:52:06.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:06 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:52:06.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:06 vm05.local ceph-mon[61345]: pgmap v1635: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:52:06.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:06 vm05.local ceph-mon[61345]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:52:06.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:06 vm05.local ceph-mon[61345]: osdmap e752: 8 total, 8 up, 8 in 2026-03-09T20:52:06.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:06 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:52:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:06 vm05.local ceph-mon[51870]: pgmap v1635: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:52:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:06 vm05.local ceph-mon[51870]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:52:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:06 vm05.local ceph-mon[51870]: osdmap e752: 8 total, 8 up, 8 in 2026-03-09T20:52:06.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:06 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:52:07.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:52:06 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:52:07.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:07 vm05.local ceph-mon[61345]: osdmap e753: 8 total, 8 up, 8 in 2026-03-09T20:52:07.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:07 vm05.local ceph-mon[61345]: 
from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:52:07.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:07 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:52:07.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:07 vm05.local ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:52:07.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:07 vm05.local ceph-mon[51870]: osdmap e753: 8 total, 8 up, 8 in 2026-03-09T20:52:07.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:07 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:52:07.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:07 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:52:07.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:07 vm05.local ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:52:07.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:07 vm09.local ceph-mon[54524]: osdmap e753: 8 total, 8 up, 8 in 2026-03-09T20:52:07.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:07 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:52:07.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:07 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:52:07.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:07 vm09.local ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:52:08.311 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:08 vm05.local ceph-mon[61345]: pgmap v1638: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:52:08.311 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:08 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:52:08.311 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:08 vm05.local ceph-mon[61345]: osdmap e754: 8 total, 8 up, 8 in 2026-03-09T20:52:08.311 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:08 vm05.local ceph-mon[51870]: pgmap v1638: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:52:08.311 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:08 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:52:08.311 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:08 vm05.local ceph-mon[51870]: osdmap e754: 8 total, 8 up, 8 in 2026-03-09T20:52:08.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:08 vm09.local ceph-mon[54524]: pgmap v1638: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:52:08.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:08 
vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:52:08.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:08 vm09.local ceph-mon[54524]: osdmap e754: 8 total, 8 up, 8 in 2026-03-09T20:52:08.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:52:08 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:52:08] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:52:09.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:09 vm05.local ceph-mon[61345]: osdmap e755: 8 total, 8 up, 8 in 2026-03-09T20:52:09.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:09 vm05.local ceph-mon[51870]: osdmap e755: 8 total, 8 up, 8 in 2026-03-09T20:52:09.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:09 vm09.local ceph-mon[54524]: osdmap e755: 8 total, 8 up, 8 in 2026-03-09T20:52:10.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:10 vm05.local ceph-mon[61345]: pgmap v1641: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:52:10.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:10 vm05.local ceph-mon[61345]: osdmap e756: 8 total, 8 up, 8 in 2026-03-09T20:52:10.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:10 vm05.local ceph-mon[51870]: pgmap v1641: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:52:10.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:10 vm05.local ceph-mon[51870]: osdmap e756: 8 total, 8 up, 8 in 2026-03-09T20:52:10.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:10 vm09.local ceph-mon[54524]: pgmap v1641: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:52:10.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:10 vm09.local ceph-mon[54524]: osdmap e756: 8 total, 8 up, 8 in 2026-03-09T20:52:11.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:11 vm05.local ceph-mon[61345]: osdmap e757: 8 total, 8 up, 8 in 2026-03-09T20:52:11.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:11 vm05.local ceph-mon[51870]: osdmap e757: 8 total, 8 up, 8 in 2026-03-09T20:52:11.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:11 vm09.local ceph-mon[54524]: osdmap e757: 8 total, 8 up, 8 in 2026-03-09T20:52:12.096 INFO:tasks.workunit.client.0.vm05.stdout: write_operations: Running main() from gmock_main.cc 2026-03-09T20:52:12.096 INFO:tasks.workunit.client.0.vm05.stdout: write_operations: [==========] Running 7 tests from 1 test suite. 2026-03-09T20:52:12.096 INFO:tasks.workunit.client.0.vm05.stdout: write_operations: [----------] Global test environment set-up. 
2026-03-09T20:52:12.096 INFO:tasks.workunit.client.0.vm05.stdout: write_operations: [----------] 7 tests from NeoRadosWriteOps 2026-03-09T20:52:12.096 INFO:tasks.workunit.client.0.vm05.stdout: write_operations: [ RUN ] NeoRadosWriteOps.AssertExists 2026-03-09T20:52:12.096 INFO:tasks.workunit.client.0.vm05.stdout: write_operations: [ OK ] NeoRadosWriteOps.AssertExists (1801966 ms) 2026-03-09T20:52:12.096 INFO:tasks.workunit.client.0.vm05.stdout: write_operations: [ RUN ] NeoRadosWriteOps.AssertVersion 2026-03-09T20:52:12.096 INFO:tasks.workunit.client.0.vm05.stdout: write_operations: [ OK ] NeoRadosWriteOps.AssertVersion (3012 ms) 2026-03-09T20:52:12.096 INFO:tasks.workunit.client.0.vm05.stdout: write_operations: [ RUN ] NeoRadosWriteOps.Xattrs 2026-03-09T20:52:12.096 INFO:tasks.workunit.client.0.vm05.stdout: write_operations: [ OK ] NeoRadosWriteOps.Xattrs (3172 ms) 2026-03-09T20:52:12.096 INFO:tasks.workunit.client.0.vm05.stdout: write_operations: [ RUN ] NeoRadosWriteOps.Write 2026-03-09T20:52:12.096 INFO:tasks.workunit.client.0.vm05.stdout: write_operations: [ OK ] NeoRadosWriteOps.Write (2997 ms) 2026-03-09T20:52:12.096 INFO:tasks.workunit.client.0.vm05.stdout: write_operations: [ RUN ] NeoRadosWriteOps.Exec 2026-03-09T20:52:12.096 INFO:tasks.workunit.client.0.vm05.stdout: write_operations: [ OK ] NeoRadosWriteOps.Exec (3033 ms) 2026-03-09T20:52:12.096 INFO:tasks.workunit.client.0.vm05.stdout: write_operations: [ RUN ] NeoRadosWriteOps.WriteSame 2026-03-09T20:52:12.096 INFO:tasks.workunit.client.0.vm05.stdout: write_operations: [ OK ] NeoRadosWriteOps.WriteSame (3078 ms) 2026-03-09T20:52:12.096 INFO:tasks.workunit.client.0.vm05.stdout: write_operations: [ RUN ] NeoRadosWriteOps.CmpExt 2026-03-09T20:52:12.096 INFO:tasks.workunit.client.0.vm05.stdout: write_operations: [ OK ] NeoRadosWriteOps.CmpExt (4038 ms) 2026-03-09T20:52:12.096 INFO:tasks.workunit.client.0.vm05.stdout: write_operations: [----------] 7 tests from NeoRadosWriteOps (1821296 ms total) 2026-03-09T20:52:12.096 INFO:tasks.workunit.client.0.vm05.stdout: write_operations: 2026-03-09T20:52:12.096 INFO:tasks.workunit.client.0.vm05.stdout: write_operations: [----------] Global test environment tear-down 2026-03-09T20:52:12.096 INFO:tasks.workunit.client.0.vm05.stdout: write_operations: [==========] 7 tests from 1 test suite ran. (1821296 ms total) 2026-03-09T20:52:12.096 INFO:tasks.workunit.client.0.vm05.stdout: write_operations: [ PASSED ] 7 tests. 2026-03-09T20:52:12.096 INFO:tasks.workunit.client.0.vm05.stderr:+ exit 0 2026-03-09T20:52:12.096 INFO:tasks.workunit.client.0.vm05.stderr:+ cleanup 2026-03-09T20:52:12.096 INFO:tasks.workunit.client.0.vm05.stderr:+ pkill -P 94212 2026-03-09T20:52:12.104 INFO:tasks.workunit.client.0.vm05.stderr:+ true 2026-03-09T20:52:12.104 INFO:teuthology.orchestra.run:Running command with timeout 3600 2026-03-09T20:52:12.104 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0/tmp 2026-03-09T20:52:12.138 INFO:tasks.workunit:Running workunits matching rados/test_pool_quota.sh on client.0... 2026-03-09T20:52:12.138 INFO:tasks.workunit:Running workunit rados/test_pool_quota.sh... 
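(Editor's sketch, bash, not the actual qa/workunits/rados/test.sh source: the stderr trace above, '+ for t in "${!pids[@]}"' followed by '+ wait <pid>' for each stored pid, is the script reaping the gtest jobs it had launched in parallel before '+ exit 0'. A minimal version of that launch-and-reap pattern looks like the lines below; the binary names are illustrative only.)
declare -A pids
for bin in ceph_test_neorados_watch_notify ceph_test_neorados_write_operations; do   # illustrative names
    "$bin" &                      # launch each test binary in the background
    pids[$bin]=$!                 # remember its pid under the test's label
done
for t in "${!pids[@]}"; do
    pid=${pids[$t]}
    wait "$pid" || { echo "test $t failed"; exit 1; }   # reap every job; fail fast on error
done
exit 0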
2026-03-09T20:52:12.138 DEBUG:teuthology.orchestra.run.vm05:workunit test rados/test_pool_quota.sh> mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=569c3e99c9b32a51b4eaf08731c728f4513ed589 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_pool_quota.sh 2026-03-09T20:52:12.198 INFO:tasks.workunit.client.0.vm05.stderr:++ uuidgen 2026-03-09T20:52:12.200 INFO:tasks.workunit.client.0.vm05.stderr:+ p=27ffa175-ba53-4b7b-afd8-5d830c8341ae 2026-03-09T20:52:12.200 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph osd pool create 27ffa175-ba53-4b7b-afd8-5d830c8341ae 12 2026-03-09T20:52:12.258 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:12.257+0000 7f9d4807c640 1 -- 192.168.123.105:0/2657077229 >> v1:192.168.123.105:6789/0 conn(0x7f9d40111370 legacy=0x7f9d40113810 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:12.258 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:12.258+0000 7f9d4807c640 1 -- 192.168.123.105:0/2657077229 shutdown_connections 2026-03-09T20:52:12.258 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:12.258+0000 7f9d4807c640 1 -- 192.168.123.105:0/2657077229 >> 192.168.123.105:0/2657077229 conn(0x7f9d401005f0 msgr2=0x7f9d40102a10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:52:12.258 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:12.258+0000 7f9d4807c640 1 -- 192.168.123.105:0/2657077229 shutdown_connections 2026-03-09T20:52:12.258 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:12.258+0000 7f9d4807c640 1 -- 192.168.123.105:0/2657077229 wait complete. 
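(Editor's sketch, bash: the opening steps of rados/test_pool_quota.sh as they appear in the stderr trace that follows; the full script lives under qa/workunits/rados/ in the suite repo. Note that CEPH_CLI_TEST_DUP_COMMAND=1 in the invocation above makes the ceph CLI send each mon command a second time, which is why the repeated "osd pool create" a little further down is acked with "pool ... already exists".)
p=$(uuidgen)                                   # throwaway UUID pool name, as in the trace
ceph osd pool create "$p" 12                   # 12 PGs
ceph osd pool set-quota "$p" max_objects 10    # cap the pool at 10 objects
ceph osd pool application enable "$p" rados    # tag the pool so POOL_APP_NOT_ENABLED clears for it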
2026-03-09T20:52:12.259 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:12.258+0000 7f9d4807c640 1 Processor -- start 2026-03-09T20:52:12.259 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:12.258+0000 7f9d4807c640 1 -- start start 2026-03-09T20:52:12.259 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:12.258+0000 7f9d4807c640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f9d40111120 con 0x7f9d4010d7a0 2026-03-09T20:52:12.259 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:12.258+0000 7f9d4807c640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f9d401acda0 con 0x7f9d40111370 2026-03-09T20:52:12.259 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:12.258+0000 7f9d4807c640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f9d401adf80 con 0x7f9d4010a900 2026-03-09T20:52:12.259 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:12.258+0000 7f9d455f0640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f9d4010d7a0 0x7f9d4010e7b0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:55926/0 (socket says 192.168.123.105:55926) 2026-03-09T20:52:12.259 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:12.258+0000 7f9d455f0640 1 -- 192.168.123.105:0/390840208 learned_addr learned my addr 192.168.123.105:0/390840208 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:52:12.260 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:12.259+0000 7f9d36ffd640 1 -- 192.168.123.105:0/390840208 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3857553940 0 0) 0x7f9d40111120 con 0x7f9d4010d7a0 2026-03-09T20:52:12.260 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:12.259+0000 7f9d36ffd640 1 -- 192.168.123.105:0/390840208 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f9d18003620 con 0x7f9d4010d7a0 2026-03-09T20:52:12.260 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:12.259+0000 7f9d36ffd640 1 -- 192.168.123.105:0/390840208 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1866628680 0 0) 0x7f9d401adf80 con 0x7f9d4010a900 2026-03-09T20:52:12.260 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:12.259+0000 7f9d36ffd640 1 -- 192.168.123.105:0/390840208 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f9d40111120 con 0x7f9d4010a900 2026-03-09T20:52:12.260 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:12.259+0000 7f9d36ffd640 1 -- 192.168.123.105:0/390840208 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3775771414 0 0) 0x7f9d401acda0 con 0x7f9d40111370 2026-03-09T20:52:12.260 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:12.259+0000 7f9d36ffd640 1 -- 192.168.123.105:0/390840208 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f9d401adf80 con 0x7f9d40111370 2026-03-09T20:52:12.260 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:12.259+0000 7f9d36ffd640 1 -- 192.168.123.105:0/390840208 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 826502965 0 0) 0x7f9d18003620 con 0x7f9d4010d7a0 2026-03-09T20:52:12.260 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:12.259+0000 7f9d36ffd640 1 -- 192.168.123.105:0/390840208 --> v1:192.168.123.105:6789/0 -- 
auth(proto 2 165 bytes epoch 0) -- 0x7f9d401acda0 con 0x7f9d4010d7a0 2026-03-09T20:52:12.260 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:12.260+0000 7f9d36ffd640 1 -- 192.168.123.105:0/390840208 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f9d30003120 con 0x7f9d4010d7a0 2026-03-09T20:52:12.260 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:12.260+0000 7f9d36ffd640 1 -- 192.168.123.105:0/390840208 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1254981612 0 0) 0x7f9d40111120 con 0x7f9d4010a900 2026-03-09T20:52:12.260 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:12.260+0000 7f9d36ffd640 1 -- 192.168.123.105:0/390840208 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f9d18003620 con 0x7f9d4010a900 2026-03-09T20:52:12.260 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:12.260+0000 7f9d36ffd640 1 -- 192.168.123.105:0/390840208 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2595933328 0 0) 0x7f9d401acda0 con 0x7f9d4010d7a0 2026-03-09T20:52:12.261 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:12.260+0000 7f9d36ffd640 1 -- 192.168.123.105:0/390840208 >> v1:192.168.123.105:6790/0 conn(0x7f9d4010a900 legacy=0x7f9d4010e0a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:12.261 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:12.260+0000 7f9d36ffd640 1 -- 192.168.123.105:0/390840208 >> v1:192.168.123.109:6789/0 conn(0x7f9d40111370 legacy=0x7f9d401aa670 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:12.261 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:12.260+0000 7f9d36ffd640 1 -- 192.168.123.105:0/390840208 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f9d401af160 con 0x7f9d4010d7a0 2026-03-09T20:52:12.261 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:12.260+0000 7f9d4807c640 1 -- 192.168.123.105:0/390840208 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f9d401abdf0 con 0x7f9d4010d7a0 2026-03-09T20:52:12.261 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:12.260+0000 7f9d36ffd640 1 -- 192.168.123.105:0/390840208 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f9d30003460 con 0x7f9d4010d7a0 2026-03-09T20:52:12.261 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:12.261+0000 7f9d36ffd640 1 -- 192.168.123.105:0/390840208 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f9d30005bf0 con 0x7f9d4010d7a0 2026-03-09T20:52:12.263 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:12.261+0000 7f9d4807c640 1 -- 192.168.123.105:0/390840208 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f9d401ac360 con 0x7f9d4010d7a0 2026-03-09T20:52:12.263 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:12.262+0000 7f9d36ffd640 1 -- 192.168.123.105:0/390840208 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 22) ==== 100060+0+0 (unknown 1243104359 0 0) 0x7f9d30004210 con 0x7f9d4010d7a0 2026-03-09T20:52:12.263 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:12.262+0000 7f9d4807c640 1 -- 192.168.123.105:0/390840208 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f9d10005180 con 0x7f9d4010d7a0 2026-03-09T20:52:12.266 
INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:12.263+0000 7f9d36ffd640 1 -- 192.168.123.105:0/390840208 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(759..759 src has 1..759) ==== 7406+0+0 (unknown 3521205372 0 0) 0x7f9d300935c0 con 0x7f9d4010d7a0 2026-03-09T20:52:12.266 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:12.265+0000 7f9d36ffd640 1 -- 192.168.123.105:0/390840208 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f9d30061aa0 con 0x7f9d4010d7a0 2026-03-09T20:52:12.361 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:12.360+0000 7f9d4807c640 1 -- 192.168.123.105:0/390840208 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "osd pool create", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "pg_num": 12} v 0) -- 0x7f9d10005470 con 0x7f9d4010d7a0 2026-03-09T20:52:12.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:12 vm05.local ceph-mon[51870]: pgmap v1644: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T20:52:12.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:12 vm05.local ceph-mon[51870]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:52:12.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:12 vm05.local ceph-mon[51870]: osdmap e758: 8 total, 8 up, 8 in 2026-03-09T20:52:12.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:12 vm05.local ceph-mon[61345]: pgmap v1644: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T20:52:12.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:12 vm05.local ceph-mon[61345]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:52:12.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:12 vm05.local ceph-mon[61345]: osdmap e758: 8 total, 8 up, 8 in 2026-03-09T20:52:12.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:12 vm09.local ceph-mon[54524]: pgmap v1644: 196 pgs: 196 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T20:52:12.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:12 vm09.local ceph-mon[54524]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:52:12.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:12 vm09.local ceph-mon[54524]: osdmap e758: 8 total, 8 up, 8 in 2026-03-09T20:52:13.097 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:13.097+0000 7f9d36ffd640 1 -- 192.168.123.105:0/390840208 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "osd pool create", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "pg_num": 12}]=0 pool '27ffa175-ba53-4b7b-afd8-5d830c8341ae' created v760) ==== 176+0+0 (unknown 3013739240 0 0) 0x7f9d300669e0 con 0x7f9d4010d7a0 2026-03-09T20:52:13.156 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:13.155+0000 7f9d4807c640 1 -- 192.168.123.105:0/390840208 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "osd pool create", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "pg_num": 12} v 0) -- 0x7f9d10002980 con 0x7f9d4010d7a0 2026-03-09T20:52:13.156 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:13.155+0000 7f9d36ffd640 1 -- 192.168.123.105:0/390840208 <== mon.0 v1:192.168.123.105:6789/0 11 ==== 
mon_command_ack([{"prefix": "osd pool create", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "pg_num": 12}]=0 pool '27ffa175-ba53-4b7b-afd8-5d830c8341ae' already exists v760) ==== 183+0+0 (unknown 754058581 0 0) 0x7f9d10002980 con 0x7f9d4010d7a0 2026-03-09T20:52:13.156 INFO:tasks.workunit.client.0.vm05.stderr:pool '27ffa175-ba53-4b7b-afd8-5d830c8341ae' already exists 2026-03-09T20:52:13.158 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:13.158+0000 7f9d4807c640 1 -- 192.168.123.105:0/390840208 >> v1:192.168.123.105:6800/1903060503 conn(0x7f9d18078100 legacy=0x7f9d1807a5c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:13.158 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:13.158+0000 7f9d4807c640 1 -- 192.168.123.105:0/390840208 >> v1:192.168.123.105:6789/0 conn(0x7f9d4010d7a0 legacy=0x7f9d4010e7b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:13.158 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:13.158+0000 7f9d4807c640 1 -- 192.168.123.105:0/390840208 shutdown_connections 2026-03-09T20:52:13.159 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:13.158+0000 7f9d4807c640 1 -- 192.168.123.105:0/390840208 >> 192.168.123.105:0/390840208 conn(0x7f9d401005f0 msgr2=0x7f9d401147f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:52:13.159 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:13.158+0000 7f9d4807c640 1 -- 192.168.123.105:0/390840208 shutdown_connections 2026-03-09T20:52:13.159 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:13.158+0000 7f9d4807c640 1 -- 192.168.123.105:0/390840208 wait complete. 2026-03-09T20:52:13.166 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph osd pool set-quota 27ffa175-ba53-4b7b-afd8-5d830c8341ae max_objects 10 2026-03-09T20:52:13.220 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:13.219+0000 7f6cab2cd640 1 -- 192.168.123.105:0/1470613778 >> v1:192.168.123.105:6789/0 conn(0x7f6ca4078110 legacy=0x7f6ca4114260 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:13.220 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:13.219+0000 7f6cab2cd640 1 -- 192.168.123.105:0/1470613778 shutdown_connections 2026-03-09T20:52:13.220 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:13.219+0000 7f6cab2cd640 1 -- 192.168.123.105:0/1470613778 >> 192.168.123.105:0/1470613778 conn(0x7f6ca41005f0 msgr2=0x7f6ca4102a10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:52:13.220 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:13.219+0000 7f6cab2cd640 1 -- 192.168.123.105:0/1470613778 shutdown_connections 2026-03-09T20:52:13.221 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:13.220+0000 7f6cab2cd640 1 -- 192.168.123.105:0/1470613778 wait complete. 
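(Editor's sketch, bash, not part of the workunit output: a quick way to confirm the quota that was just applied and to see what exceeding it looks like. The pool name is the one from this trace; object names are illustrative.)
p=27ffa175-ba53-4b7b-afd8-5d830c8341ae
ceph osd pool get-quota "$p"                   # should report max objects: 10
for i in $(seq 1 12); do
    rados -p "$p" put "obj_$i" /etc/hosts      # push past the 10-object cap
done
ceph health detail                             # the pool is expected to be flagged full once the quota is hit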
2026-03-09T20:52:13.221 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:13.220+0000 7f6cab2cd640 1 Processor -- start 2026-03-09T20:52:13.221 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:13.220+0000 7f6cab2cd640 1 -- start start 2026-03-09T20:52:13.221 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:13.220+0000 7f6cab2cd640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f6ca4115770 con 0x7f6ca4077620 2026-03-09T20:52:13.221 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:13.220+0000 7f6cab2cd640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f6ca41b12e0 con 0x7f6ca4115980 2026-03-09T20:52:13.221 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:13.220+0000 7f6cab2cd640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f6ca41b24c0 con 0x7f6ca4078110 2026-03-09T20:52:13.221 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:13.220+0000 7f6ca8841640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7f6ca4078110 0x7f6ca4112c40 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.105:56674/0 (socket says 192.168.123.105:56674) 2026-03-09T20:52:13.221 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:13.220+0000 7f6ca8841640 1 -- 192.168.123.105:0/2755928272 learned_addr learned my addr 192.168.123.105:0/2755928272 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:52:13.222 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:13.220+0000 7f6c927fc640 1 -- 192.168.123.105:0/2755928272 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2224439232 0 0) 0x7f6ca41b24c0 con 0x7f6ca4078110 2026-03-09T20:52:13.222 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:13.221+0000 7f6c927fc640 1 -- 192.168.123.105:0/2755928272 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f6c80003620 con 0x7f6ca4078110 2026-03-09T20:52:13.222 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:13.221+0000 7f6c927fc640 1 -- 192.168.123.105:0/2755928272 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2277555583 0 0) 0x7f6ca41b12e0 con 0x7f6ca4115980 2026-03-09T20:52:13.222 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:13.221+0000 7f6c927fc640 1 -- 192.168.123.105:0/2755928272 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f6ca41b24c0 con 0x7f6ca4115980 2026-03-09T20:52:13.222 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:13.221+0000 7f6c927fc640 1 -- 192.168.123.105:0/2755928272 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2495589605 0 0) 0x7f6c80003620 con 0x7f6ca4078110 2026-03-09T20:52:13.222 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:13.221+0000 7f6c927fc640 1 -- 192.168.123.105:0/2755928272 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f6ca41b12e0 con 0x7f6ca4078110 2026-03-09T20:52:13.222 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:13.221+0000 7f6c927fc640 1 -- 192.168.123.105:0/2755928272 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f6c98004460 con 0x7f6ca4078110 2026-03-09T20:52:13.222 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:13.221+0000 7f6c927fc640 1 -- 192.168.123.105:0/2755928272 <== mon.2 v1:192.168.123.105:6790/0 4 
==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 8261400 0 0) 0x7f6ca41b12e0 con 0x7f6ca4078110 2026-03-09T20:52:13.222 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:13.221+0000 7f6c927fc640 1 -- 192.168.123.105:0/2755928272 >> v1:192.168.123.109:6789/0 conn(0x7f6ca4115980 legacy=0x7f6ca41aebb0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:13.222 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:13.221+0000 7f6c927fc640 1 -- 192.168.123.105:0/2755928272 >> v1:192.168.123.105:6789/0 conn(0x7f6ca4077620 legacy=0x7f6ca4112530 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T20:52:13.222 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:13.221+0000 7f6c927fc640 1 -- 192.168.123.105:0/2755928272 --> v1:192.168.123.105:6790/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f6ca41b36c0 con 0x7f6ca4078110 2026-03-09T20:52:13.222 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:13.221+0000 7f6c927fc640 1 -- 192.168.123.105:0/2755928272 <== mon.2 v1:192.168.123.105:6790/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f6c980032d0 con 0x7f6ca4078110 2026-03-09T20:52:13.223 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:13.222+0000 7f6cab2cd640 1 -- 192.168.123.105:0/2755928272 --> v1:192.168.123.105:6790/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f6ca41b1510 con 0x7f6ca4078110 2026-03-09T20:52:13.223 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:13.222+0000 7f6cab2cd640 1 -- 192.168.123.105:0/2755928272 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=0}) -- 0x7f6ca41b1ad0 con 0x7f6ca4078110 2026-03-09T20:52:13.224 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:13.222+0000 7f6c927fc640 1 -- 192.168.123.105:0/2755928272 <== mon.2 v1:192.168.123.105:6790/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f6c980038e0 con 0x7f6ca4078110 2026-03-09T20:52:13.224 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:13.224+0000 7f6c927fc640 1 -- 192.168.123.105:0/2755928272 <== mon.2 v1:192.168.123.105:6790/0 7 ==== mgrmap(e 22) ==== 100060+0+0 (unknown 1243104359 0 0) 0x7f6c98003b00 con 0x7f6ca4078110 2026-03-09T20:52:13.228 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:13.224+0000 7f6cab2cd640 1 -- 192.168.123.105:0/2755928272 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f6c6c005180 con 0x7f6ca4078110 2026-03-09T20:52:13.228 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:13.224+0000 7f6c927fc640 1 -- 192.168.123.105:0/2755928272 <== mon.2 v1:192.168.123.105:6790/0 8 ==== osd_map(760..760 src has 1..760) ==== 7781+0+0 (unknown 3099004898 0 0) 0x7f6c980958f0 con 0x7f6ca4078110 2026-03-09T20:52:13.228 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:13.227+0000 7f6c927fc640 1 -- 192.168.123.105:0/2755928272 <== mon.2 v1:192.168.123.105:6790/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f6c98061f30 con 0x7f6ca4078110 2026-03-09T20:52:13.321 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:13.320+0000 7f6cab2cd640 1 -- 192.168.123.105:0/2755928272 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "10"} v 0) -- 0x7f6c6c005470 con 0x7f6ca4078110 2026-03-09T20:52:13.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:13 vm05.local ceph-mon[51870]: osdmap e759: 8 total, 8 up, 8 in 
2026-03-09T20:52:13.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:13 vm05.local ceph-mon[51870]: from='client.? v1:192.168.123.105:0/390840208' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "pg_num": 12}]: dispatch 2026-03-09T20:52:13.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:13 vm05.local ceph-mon[61345]: osdmap e759: 8 total, 8 up, 8 in 2026-03-09T20:52:13.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:13 vm05.local ceph-mon[61345]: from='client.? v1:192.168.123.105:0/390840208' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "pg_num": 12}]: dispatch 2026-03-09T20:52:13.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:13 vm09.local ceph-mon[54524]: osdmap e759: 8 total, 8 up, 8 in 2026-03-09T20:52:13.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:13 vm09.local ceph-mon[54524]: from='client.? v1:192.168.123.105:0/390840208' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "pg_num": 12}]: dispatch 2026-03-09T20:52:14.149 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:14.148+0000 7f6c927fc640 1 -- 192.168.123.105:0/2755928272 <== mon.2 v1:192.168.123.105:6790/0 10 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "10"}]=0 set-quota max_objects = 10 for pool 27ffa175-ba53-4b7b-afd8-5d830c8341ae v761) ==== 223+0+0 (unknown 2664503136 0 0) 0x7f6c98066e70 con 0x7f6ca4078110 2026-03-09T20:52:14.205 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:14.204+0000 7f6cab2cd640 1 -- 192.168.123.105:0/2755928272 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "10"} v 0) -- 0x7f6c6c005d40 con 0x7f6ca4078110 2026-03-09T20:52:14.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:14 vm05.local ceph-mon[51870]: pgmap v1647: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:52:14.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:14 vm05.local ceph-mon[51870]: from='client.? v1:192.168.123.105:0/390840208' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "pg_num": 12}]': finished 2026-03-09T20:52:14.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:14 vm05.local ceph-mon[51870]: osdmap e760: 8 total, 8 up, 8 in 2026-03-09T20:52:14.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:14 vm05.local ceph-mon[51870]: from='client.? v1:192.168.123.105:0/390840208' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "pg_num": 12}]: dispatch 2026-03-09T20:52:14.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:14 vm05.local ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/2755928272' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T20:52:14.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:14 vm05.local ceph-mon[51870]: from='client.49619 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T20:52:14.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:14 vm05.local ceph-mon[61345]: pgmap v1647: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:52:14.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:14 vm05.local ceph-mon[61345]: from='client.? v1:192.168.123.105:0/390840208' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "pg_num": 12}]': finished 2026-03-09T20:52:14.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:14 vm05.local ceph-mon[61345]: osdmap e760: 8 total, 8 up, 8 in 2026-03-09T20:52:14.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:14 vm05.local ceph-mon[61345]: from='client.? v1:192.168.123.105:0/390840208' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "pg_num": 12}]: dispatch 2026-03-09T20:52:14.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:14 vm05.local ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2755928272' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T20:52:14.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:14 vm05.local ceph-mon[61345]: from='client.49619 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T20:52:14.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:14 vm09.local ceph-mon[54524]: pgmap v1647: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:52:14.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:14 vm09.local ceph-mon[54524]: from='client.? v1:192.168.123.105:0/390840208' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "pg_num": 12}]': finished 2026-03-09T20:52:14.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:14 vm09.local ceph-mon[54524]: osdmap e760: 8 total, 8 up, 8 in 2026-03-09T20:52:14.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:14 vm09.local ceph-mon[54524]: from='client.? v1:192.168.123.105:0/390840208' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "pg_num": 12}]: dispatch 2026-03-09T20:52:14.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:14 vm09.local ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/2755928272' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T20:52:14.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:14 vm09.local ceph-mon[54524]: from='client.49619 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T20:52:15.193 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:15.192+0000 7f6c927fc640 1 -- 192.168.123.105:0/2755928272 <== mon.2 v1:192.168.123.105:6790/0 11 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "10"}]=0 set-quota max_objects = 10 for pool 27ffa175-ba53-4b7b-afd8-5d830c8341ae v762) ==== 223+0+0 (unknown 1159723248 0 0) 0x7f6c98059e90 con 0x7f6ca4078110 2026-03-09T20:52:15.193 INFO:tasks.workunit.client.0.vm05.stderr:set-quota max_objects = 10 for pool 27ffa175-ba53-4b7b-afd8-5d830c8341ae 2026-03-09T20:52:15.195 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:15.194+0000 7f6cab2cd640 1 -- 192.168.123.105:0/2755928272 >> v1:192.168.123.105:6800/1903060503 conn(0x7f6c800784b0 legacy=0x7f6c8007a970 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:15.195 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:15.194+0000 7f6cab2cd640 1 -- 192.168.123.105:0/2755928272 >> v1:192.168.123.105:6790/0 conn(0x7f6ca4078110 legacy=0x7f6ca4112c40 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:15.195 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:15.195+0000 7f6cab2cd640 1 -- 192.168.123.105:0/2755928272 shutdown_connections 2026-03-09T20:52:15.195 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:15.195+0000 7f6cab2cd640 1 -- 192.168.123.105:0/2755928272 >> 192.168.123.105:0/2755928272 conn(0x7f6ca41005f0 msgr2=0x7f6ca407ab60 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:52:15.195 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:15.195+0000 7f6cab2cd640 1 -- 192.168.123.105:0/2755928272 shutdown_connections 2026-03-09T20:52:15.195 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:15.195+0000 7f6cab2cd640 1 -- 192.168.123.105:0/2755928272 wait complete. 2026-03-09T20:52:15.203 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph osd pool application enable 27ffa175-ba53-4b7b-afd8-5d830c8341ae rados 2026-03-09T20:52:15.256 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:15.255+0000 7f3959d7e640 1 -- 192.168.123.105:0/4190671612 >> v1:192.168.123.105:6790/0 conn(0x7f395410b540 legacy=0x7f395410d930 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:15.256 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:15.255+0000 7f3959d7e640 1 -- 192.168.123.105:0/4190671612 shutdown_connections 2026-03-09T20:52:15.256 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:15.255+0000 7f3959d7e640 1 -- 192.168.123.105:0/4190671612 >> 192.168.123.105:0/4190671612 conn(0x7f39540fe3b0 msgr2=0x7f39541007d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:52:15.256 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:15.255+0000 7f3959d7e640 1 -- 192.168.123.105:0/4190671612 shutdown_connections 2026-03-09T20:52:15.256 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:15.256+0000 7f3959d7e640 1 -- 192.168.123.105:0/4190671612 wait complete. 
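[editor's note] The mon_command_ack above confirms the workunit's setup so far: a throwaway pool (named with a UUID-style string) created with pg_num 12 and capped at 10 objects. A minimal sketch of the equivalent CLI calls, using a hypothetical pool name "testpool" in place of the UUID:

    ceph osd pool create testpool 12                    # pg_num 12, as dispatched above
    ceph osd pool set-quota testpool max_objects 10     # the quota the test will hit
    # the same subcommand also accepts max_bytes for a size-based quota
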
2026-03-09T20:52:15.257 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:15.256+0000 7f3959d7e640 1 Processor -- start 2026-03-09T20:52:15.257 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:15.256+0000 7f3959d7e640 1 -- start start 2026-03-09T20:52:15.257 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:15.256+0000 7f3959d7e640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f39541a1260 con 0x7f395410f110 2026-03-09T20:52:15.257 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:15.256+0000 7f3959d7e640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f39541a1430 con 0x7f39541086a0 2026-03-09T20:52:15.257 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:15.256+0000 7f3959d7e640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f39541a1600 con 0x7f395410b540 2026-03-09T20:52:15.257 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:15.256+0000 7f3952ffd640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7f395410b540 0x7f39541a0440 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.105:56688/0 (socket says 192.168.123.105:56688) 2026-03-09T20:52:15.257 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:15.256+0000 7f3952ffd640 1 -- 192.168.123.105:0/713014630 learned_addr learned my addr 192.168.123.105:0/713014630 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:52:15.258 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:15.257+0000 7f3950ff9640 1 -- 192.168.123.105:0/713014630 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1148473250 0 0) 0x7f39541a1600 con 0x7f395410b540 2026-03-09T20:52:15.258 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:15.257+0000 7f3950ff9640 1 -- 192.168.123.105:0/713014630 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f3924003620 con 0x7f395410b540 2026-03-09T20:52:15.258 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:15.257+0000 7f3950ff9640 1 -- 192.168.123.105:0/713014630 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 772167364 0 0) 0x7f39541a1260 con 0x7f395410f110 2026-03-09T20:52:15.258 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:15.257+0000 7f3950ff9640 1 -- 192.168.123.105:0/713014630 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f39541a1600 con 0x7f395410f110 2026-03-09T20:52:15.258 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:15.257+0000 7f3950ff9640 1 -- 192.168.123.105:0/713014630 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1392760154 0 0) 0x7f3924003620 con 0x7f395410b540 2026-03-09T20:52:15.258 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:15.257+0000 7f3950ff9640 1 -- 192.168.123.105:0/713014630 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f39541a1260 con 0x7f395410b540 2026-03-09T20:52:15.258 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:15.257+0000 7f3950ff9640 1 -- 192.168.123.105:0/713014630 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1269623321 0 0) 0x7f39541a1600 con 0x7f395410f110 2026-03-09T20:52:15.258 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:15.257+0000 7f3950ff9640 1 -- 192.168.123.105:0/713014630 --> v1:192.168.123.105:6789/0 -- 
auth(proto 2 165 bytes epoch 0) -- 0x7f3924003620 con 0x7f395410f110 2026-03-09T20:52:15.258 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:15.258+0000 7f3950ff9640 1 -- 192.168.123.105:0/713014630 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f3940004530 con 0x7f395410b540 2026-03-09T20:52:15.258 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:15.258+0000 7f3950ff9640 1 -- 192.168.123.105:0/713014630 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f3948003250 con 0x7f395410f110 2026-03-09T20:52:15.258 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:15.258+0000 7f3950ff9640 1 -- 192.168.123.105:0/713014630 <== mon.2 v1:192.168.123.105:6790/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 4106742952 0 0) 0x7f39541a1260 con 0x7f395410b540 2026-03-09T20:52:15.258 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:15.258+0000 7f3950ff9640 1 -- 192.168.123.105:0/713014630 >> v1:192.168.123.109:6789/0 conn(0x7f39541086a0 legacy=0x7f3954107530 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:15.259 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:15.258+0000 7f3950ff9640 1 -- 192.168.123.105:0/713014630 >> v1:192.168.123.105:6789/0 conn(0x7f395410f110 legacy=0x7f39541a0b50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:15.259 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:15.258+0000 7f3950ff9640 1 -- 192.168.123.105:0/713014630 --> v1:192.168.123.105:6790/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f39541b3320 con 0x7f395410b540 2026-03-09T20:52:15.259 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:15.258+0000 7f3959d7e640 1 -- 192.168.123.105:0/713014630 --> v1:192.168.123.105:6790/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f39541b0350 con 0x7f395410b540 2026-03-09T20:52:15.259 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:15.259+0000 7f3950ff9640 1 -- 192.168.123.105:0/713014630 <== mon.2 v1:192.168.123.105:6790/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f3940003df0 con 0x7f395410b540 2026-03-09T20:52:15.260 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:15.259+0000 7f3950ff9640 1 -- 192.168.123.105:0/713014630 <== mon.2 v1:192.168.123.105:6790/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f3940005160 con 0x7f395410b540 2026-03-09T20:52:15.261 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:15.259+0000 7f3959d7e640 1 -- 192.168.123.105:0/713014630 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=0}) -- 0x7f39541b08c0 con 0x7f395410b540 2026-03-09T20:52:15.261 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:15.260+0000 7f3959d7e640 1 -- 192.168.123.105:0/713014630 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f3954103dd0 con 0x7f395410b540 2026-03-09T20:52:15.261 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:15.260+0000 7f3950ff9640 1 -- 192.168.123.105:0/713014630 <== mon.2 v1:192.168.123.105:6790/0 7 ==== mgrmap(e 22) ==== 100060+0+0 (unknown 1243104359 0 0) 0x7f39400053c0 con 0x7f395410b540 2026-03-09T20:52:15.261 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:15.261+0000 7f3950ff9640 1 -- 192.168.123.105:0/713014630 <== mon.2 v1:192.168.123.105:6790/0 8 ==== osd_map(762..762 src has 1..762) ==== 7781+0+0 (unknown 1716666772 0 0) 0x7f39400049b0 con 0x7f395410b540 2026-03-09T20:52:15.264 
INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:15.263+0000 7f3950ff9640 1 -- 192.168.123.105:0/713014630 <== mon.2 v1:192.168.123.105:6790/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f3940063090 con 0x7f395410b540 2026-03-09T20:52:15.360 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:15.359+0000 7f3959d7e640 1 -- 192.168.123.105:0/713014630 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "osd pool application enable", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "app": "rados"} v 0) -- 0x7f3954113d00 con 0x7f395410b540 2026-03-09T20:52:15.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:15 vm09.local ceph-mon[54524]: from='client.49619 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "10"}]': finished 2026-03-09T20:52:15.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:15 vm09.local ceph-mon[54524]: osdmap e761: 8 total, 8 up, 8 in 2026-03-09T20:52:15.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:15 vm09.local ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2755928272' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T20:52:15.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:15 vm09.local ceph-mon[54524]: from='client.49619 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T20:52:15.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:15 vm09.local ceph-mon[54524]: pgmap v1650: 176 pgs: 12 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:52:15.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:15 vm05.local ceph-mon[61345]: from='client.49619 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "10"}]': finished 2026-03-09T20:52:15.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:15 vm05.local ceph-mon[61345]: osdmap e761: 8 total, 8 up, 8 in 2026-03-09T20:52:15.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:15 vm05.local ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/2755928272' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T20:52:15.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:15 vm05.local ceph-mon[61345]: from='client.49619 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T20:52:15.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:15 vm05.local ceph-mon[61345]: pgmap v1650: 176 pgs: 12 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:52:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:15 vm05.local ceph-mon[51870]: from='client.49619 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "10"}]': finished 2026-03-09T20:52:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:15 vm05.local ceph-mon[51870]: osdmap e761: 8 total, 8 up, 8 in 2026-03-09T20:52:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:15 vm05.local ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2755928272' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T20:52:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:15 vm05.local ceph-mon[51870]: from='client.49619 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T20:52:15.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:15 vm05.local ceph-mon[51870]: pgmap v1650: 176 pgs: 12 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-09T20:52:16.200 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:16.199+0000 7f3950ff9640 1 -- 192.168.123.105:0/713014630 <== mon.2 v1:192.168.123.105:6790/0 10 ==== mon_command_ack([{"prefix": "osd pool application enable", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "app": "rados"}]=0 enabled application 'rados' on pool '27ffa175-ba53-4b7b-afd8-5d830c8341ae' v763) ==== 213+0+0 (unknown 717625420 0 0) 0x7f3940067fd0 con 0x7f395410b540 2026-03-09T20:52:16.252 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:16.250+0000 7f3959d7e640 1 -- 192.168.123.105:0/713014630 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "osd pool application enable", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "app": "rados"} v 0) -- 0x7f39541b0d50 con 0x7f395410b540 2026-03-09T20:52:16.440 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:16 vm09.local ceph-mon[54524]: from='client.49619 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "10"}]': finished 2026-03-09T20:52:16.440 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:16 vm09.local ceph-mon[54524]: osdmap e762: 8 total, 8 up, 8 in 2026-03-09T20:52:16.440 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:16 vm09.local ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/713014630' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "app": "rados"}]: dispatch 2026-03-09T20:52:16.440 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:16 vm09.local ceph-mon[54524]: from='client.49625 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "app": "rados"}]: dispatch 2026-03-09T20:52:16.440 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:16 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:52:16.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:16 vm05.local ceph-mon[51870]: from='client.49619 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "10"}]': finished 2026-03-09T20:52:16.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:16 vm05.local ceph-mon[51870]: osdmap e762: 8 total, 8 up, 8 in 2026-03-09T20:52:16.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:16 vm05.local ceph-mon[51870]: from='client.? v1:192.168.123.105:0/713014630' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "app": "rados"}]: dispatch 2026-03-09T20:52:16.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:16 vm05.local ceph-mon[51870]: from='client.49625 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "app": "rados"}]: dispatch 2026-03-09T20:52:16.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:16 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:52:16.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:16 vm05.local ceph-mon[61345]: from='client.49619 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "10"}]': finished 2026-03-09T20:52:16.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:16 vm05.local ceph-mon[61345]: osdmap e762: 8 total, 8 up, 8 in 2026-03-09T20:52:16.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:16 vm05.local ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/713014630' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "app": "rados"}]: dispatch 2026-03-09T20:52:16.661 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:16 vm05.local ceph-mon[61345]: from='client.49625 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "app": "rados"}]: dispatch 2026-03-09T20:52:16.661 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:16 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:52:17.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:52:16 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:52:17.208 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:17.207+0000 7f3950ff9640 1 -- 192.168.123.105:0/713014630 <== mon.2 v1:192.168.123.105:6790/0 11 ==== mon_command_ack([{"prefix": "osd pool application enable", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "app": "rados"}]=0 enabled application 'rados' on pool '27ffa175-ba53-4b7b-afd8-5d830c8341ae' v764) ==== 213+0+0 (unknown 2351159550 0 0) 0x7f394005aff0 con 0x7f395410b540 2026-03-09T20:52:17.208 INFO:tasks.workunit.client.0.vm05.stderr:enabled application 'rados' on pool '27ffa175-ba53-4b7b-afd8-5d830c8341ae' 2026-03-09T20:52:17.210 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:17.209+0000 7f3959d7e640 1 -- 192.168.123.105:0/713014630 >> v1:192.168.123.105:6800/1903060503 conn(0x7f3924078410 legacy=0x7f392407a8d0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:17.210 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:17.209+0000 7f3959d7e640 1 -- 192.168.123.105:0/713014630 >> v1:192.168.123.105:6790/0 conn(0x7f395410b540 legacy=0x7f39541a0440 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:17.210 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:17.210+0000 7f3959d7e640 1 -- 192.168.123.105:0/713014630 shutdown_connections 2026-03-09T20:52:17.210 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:17.210+0000 7f3959d7e640 1 -- 192.168.123.105:0/713014630 >> 192.168.123.105:0/713014630 conn(0x7f39540fe3b0 msgr2=0x7f39541125a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:52:17.210 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:17.210+0000 7f3959d7e640 1 -- 192.168.123.105:0/713014630 shutdown_connections 2026-03-09T20:52:17.210 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:17.210+0000 7f3959d7e640 1 -- 192.168.123.105:0/713014630 wait complete. 
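[editor's note] With the 'rados' application enabled on the pool, the script fills it exactly to the quota: the shell trace that follows puts ten copies of /etc/passwd and then sleeps 30 seconds so the full flag can propagate to the monitors. A sketch of that step under the same placeholder pool name:

    ceph osd pool application enable testpool rados
    for f in $(seq 1 10); do
        rados -p testpool put "obj$f" /etc/passwd
    done
    sleep 30    # give the mons time to mark the pool full
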
2026-03-09T20:52:17.218 INFO:tasks.workunit.client.0.vm05.stderr:++ seq 1 10 2026-03-09T20:52:17.220 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in `seq 1 10` 2026-03-09T20:52:17.220 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p 27ffa175-ba53-4b7b-afd8-5d830c8341ae put obj1 /etc/passwd 2026-03-09T20:52:17.256 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in `seq 1 10` 2026-03-09T20:52:17.256 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p 27ffa175-ba53-4b7b-afd8-5d830c8341ae put obj2 /etc/passwd 2026-03-09T20:52:17.284 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in `seq 1 10` 2026-03-09T20:52:17.284 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p 27ffa175-ba53-4b7b-afd8-5d830c8341ae put obj3 /etc/passwd 2026-03-09T20:52:17.317 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in `seq 1 10` 2026-03-09T20:52:17.317 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p 27ffa175-ba53-4b7b-afd8-5d830c8341ae put obj4 /etc/passwd 2026-03-09T20:52:17.345 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in `seq 1 10` 2026-03-09T20:52:17.345 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p 27ffa175-ba53-4b7b-afd8-5d830c8341ae put obj5 /etc/passwd 2026-03-09T20:52:17.374 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in `seq 1 10` 2026-03-09T20:52:17.375 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p 27ffa175-ba53-4b7b-afd8-5d830c8341ae put obj6 /etc/passwd 2026-03-09T20:52:17.402 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in `seq 1 10` 2026-03-09T20:52:17.402 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p 27ffa175-ba53-4b7b-afd8-5d830c8341ae put obj7 /etc/passwd 2026-03-09T20:52:17.429 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in `seq 1 10` 2026-03-09T20:52:17.429 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p 27ffa175-ba53-4b7b-afd8-5d830c8341ae put obj8 /etc/passwd 2026-03-09T20:52:17.456 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in `seq 1 10` 2026-03-09T20:52:17.456 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p 27ffa175-ba53-4b7b-afd8-5d830c8341ae put obj9 /etc/passwd 2026-03-09T20:52:17.483 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in `seq 1 10` 2026-03-09T20:52:17.483 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p 27ffa175-ba53-4b7b-afd8-5d830c8341ae put obj10 /etc/passwd 2026-03-09T20:52:17.514 INFO:tasks.workunit.client.0.vm05.stderr:+ sleep 30 2026-03-09T20:52:17.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:17 vm09.local ceph-mon[54524]: from='client.49625 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "app": "rados"}]': finished 2026-03-09T20:52:17.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:17 vm09.local ceph-mon[54524]: osdmap e763: 8 total, 8 up, 8 in 2026-03-09T20:52:17.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:17 vm09.local ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/713014630' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "app": "rados"}]: dispatch 2026-03-09T20:52:17.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:17 vm09.local ceph-mon[54524]: from='client.49625 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "app": "rados"}]: dispatch 2026-03-09T20:52:17.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:17 vm09.local ceph-mon[54524]: pgmap v1653: 176 pgs: 176 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:52:17.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:17 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:52:17.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:17 vm05.local ceph-mon[51870]: from='client.49625 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "app": "rados"}]': finished 2026-03-09T20:52:17.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:17 vm05.local ceph-mon[51870]: osdmap e763: 8 total, 8 up, 8 in 2026-03-09T20:52:17.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:17 vm05.local ceph-mon[51870]: from='client.? v1:192.168.123.105:0/713014630' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "app": "rados"}]: dispatch 2026-03-09T20:52:17.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:17 vm05.local ceph-mon[51870]: from='client.49625 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "app": "rados"}]: dispatch 2026-03-09T20:52:17.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:17 vm05.local ceph-mon[51870]: pgmap v1653: 176 pgs: 176 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:52:17.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:17 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:52:17.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:17 vm05.local ceph-mon[61345]: from='client.49625 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "app": "rados"}]': finished 2026-03-09T20:52:17.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:17 vm05.local ceph-mon[61345]: osdmap e763: 8 total, 8 up, 8 in 2026-03-09T20:52:17.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:17 vm05.local ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/713014630' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "app": "rados"}]: dispatch 2026-03-09T20:52:17.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:17 vm05.local ceph-mon[61345]: from='client.49625 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "app": "rados"}]: dispatch 2026-03-09T20:52:17.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:17 vm05.local ceph-mon[61345]: pgmap v1653: 176 pgs: 176 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:52:17.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:17 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:52:18.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:18 vm09.local ceph-mon[54524]: from='client.49625 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "app": "rados"}]': finished 2026-03-09T20:52:18.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:18 vm09.local ceph-mon[54524]: osdmap e764: 8 total, 8 up, 8 in 2026-03-09T20:52:18.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:18 vm09.local ceph-mon[54524]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:52:18.633 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:18 vm05.local ceph-mon[61345]: from='client.49625 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "app": "rados"}]': finished 2026-03-09T20:52:18.633 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:18 vm05.local ceph-mon[61345]: osdmap e764: 8 total, 8 up, 8 in 2026-03-09T20:52:18.633 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:18 vm05.local ceph-mon[61345]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:52:18.633 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:18 vm05.local ceph-mon[51870]: from='client.49625 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "app": "rados"}]': finished 2026-03-09T20:52:18.633 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:18 vm05.local ceph-mon[51870]: osdmap e764: 8 total, 8 up, 8 in 2026-03-09T20:52:18.633 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:18 vm05.local ceph-mon[51870]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T20:52:18.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:52:18 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:52:18] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:52:19.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:19 vm09.local ceph-mon[54524]: pgmap v1655: 176 pgs: 176 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T20:52:19.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:19 vm05.local ceph-mon[61345]: pgmap v1655: 176 pgs: 176 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T20:52:19.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:19 vm05.local ceph-mon[51870]: 
pgmap v1655: 176 pgs: 176 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T20:52:22.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:21 vm09.local ceph-mon[54524]: pgmap v1656: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 3.3 KiB/s wr, 3 op/s 2026-03-09T20:52:22.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:21 vm09.local ceph-mon[54524]: pool '27ffa175-ba53-4b7b-afd8-5d830c8341ae' is full (reached quota's max_objects: 10) 2026-03-09T20:52:22.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:21 vm09.local ceph-mon[54524]: Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T20:52:22.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:21 vm09.local ceph-mon[54524]: osdmap e765: 8 total, 8 up, 8 in 2026-03-09T20:52:22.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:21 vm05.local ceph-mon[61345]: pgmap v1656: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 3.3 KiB/s wr, 3 op/s 2026-03-09T20:52:22.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:21 vm05.local ceph-mon[61345]: pool '27ffa175-ba53-4b7b-afd8-5d830c8341ae' is full (reached quota's max_objects: 10) 2026-03-09T20:52:22.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:21 vm05.local ceph-mon[61345]: Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T20:52:22.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:21 vm05.local ceph-mon[61345]: osdmap e765: 8 total, 8 up, 8 in 2026-03-09T20:52:22.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:21 vm05.local ceph-mon[51870]: pgmap v1656: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 3.3 KiB/s wr, 3 op/s 2026-03-09T20:52:22.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:21 vm05.local ceph-mon[51870]: pool '27ffa175-ba53-4b7b-afd8-5d830c8341ae' is full (reached quota's max_objects: 10) 2026-03-09T20:52:22.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:21 vm05.local ceph-mon[51870]: Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T20:52:22.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:21 vm05.local ceph-mon[51870]: osdmap e765: 8 total, 8 up, 8 in 2026-03-09T20:52:24.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:23 vm09.local ceph-mon[54524]: pgmap v1658: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 791 B/s rd, 3.1 KiB/s wr, 2 op/s 2026-03-09T20:52:24.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:23 vm05.local ceph-mon[61345]: pgmap v1658: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 791 B/s rd, 3.1 KiB/s wr, 2 op/s 2026-03-09T20:52:24.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:23 vm05.local ceph-mon[51870]: pgmap v1658: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 791 B/s rd, 3.1 KiB/s wr, 2 op/s 2026-03-09T20:52:26.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:25 vm09.local ceph-mon[54524]: pgmap v1659: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 2.5 KiB/s wr, 1 op/s 2026-03-09T20:52:26.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:25 vm05.local ceph-mon[61345]: pgmap v1659: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 2.5 KiB/s wr, 1 op/s 2026-03-09T20:52:26.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:25 vm05.local ceph-mon[51870]: 
pgmap v1659: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 2.5 KiB/s wr, 1 op/s 2026-03-09T20:52:27.187 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:52:26 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:52:27.187 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:52:26 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=infra.usagestats t=2026-03-09T20:52:26.814402168Z level=info msg="Usage stats are ready to report" 2026-03-09T20:52:28.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:27 vm09.local ceph-mon[54524]: pgmap v1660: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 2.1 KiB/s wr, 2 op/s 2026-03-09T20:52:28.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:27 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:52:28.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:27 vm05.local ceph-mon[61345]: pgmap v1660: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 2.1 KiB/s wr, 2 op/s 2026-03-09T20:52:28.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:27 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:52:28.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:27 vm05.local ceph-mon[51870]: pgmap v1660: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 2.1 KiB/s wr, 2 op/s 2026-03-09T20:52:28.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:27 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:52:28.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:52:28 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:52:28] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:52:30.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:29 vm09.local ceph-mon[54524]: pgmap v1661: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 2.0 KiB/s wr, 1 op/s 2026-03-09T20:52:30.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:29 vm05.local ceph-mon[61345]: pgmap v1661: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 2.0 KiB/s wr, 1 op/s 2026-03-09T20:52:30.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:29 vm05.local ceph-mon[51870]: pgmap v1661: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 2.0 KiB/s wr, 1 op/s 2026-03-09T20:52:32.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:31 vm05.local ceph-mon[61345]: pgmap v1662: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:52:32.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:31 vm05.local ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:52:32.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:31 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": 
"osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:52:32.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:31 vm05.local ceph-mon[51870]: pgmap v1662: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:52:32.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:31 vm05.local ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:52:32.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:31 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:52:32.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:31 vm09.local ceph-mon[54524]: pgmap v1662: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:52:32.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:31 vm09.local ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:52:32.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:31 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:52:34.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:33 vm05.local ceph-mon[61345]: pgmap v1663: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 914 B/s rd, 0 op/s 2026-03-09T20:52:34.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:33 vm05.local ceph-mon[51870]: pgmap v1663: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 914 B/s rd, 0 op/s 2026-03-09T20:52:34.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:33 vm09.local ceph-mon[54524]: pgmap v1663: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 914 B/s rd, 0 op/s 2026-03-09T20:52:36.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:35 vm09.local ceph-mon[54524]: pgmap v1664: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:52:36.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:35 vm05.local ceph-mon[61345]: pgmap v1664: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:52:36.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:35 vm05.local ceph-mon[51870]: pgmap v1664: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:52:37.273 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:52:36 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:52:38.212 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:37 vm05.local ceph-mon[61345]: pgmap v1665: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:52:38.212 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:37 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:52:38.212 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:37 vm05.local ceph-mon[51870]: pgmap v1665: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:52:38.212 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:37 vm05.local 
ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:52:38.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:37 vm09.local ceph-mon[54524]: pgmap v1665: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:52:38.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:37 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:52:38.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:52:38 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:52:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:52:40.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:39 vm09.local ceph-mon[54524]: pgmap v1666: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:52:40.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:39 vm05.local ceph-mon[61345]: pgmap v1666: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:52:40.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:39 vm05.local ceph-mon[51870]: pgmap v1666: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:52:42.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:41 vm09.local ceph-mon[54524]: pgmap v1667: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:52:42.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:41 vm05.local ceph-mon[61345]: pgmap v1667: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:52:42.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:41 vm05.local ceph-mon[51870]: pgmap v1667: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:52:44.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:43 vm09.local ceph-mon[54524]: pgmap v1668: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:52:44.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:43 vm05.local ceph-mon[61345]: pgmap v1668: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:52:44.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:43 vm05.local ceph-mon[51870]: pgmap v1668: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:52:46.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:46 vm05.local ceph-mon[61345]: pgmap v1669: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:52:46.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:46 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:52:46.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:46 vm05.local ceph-mon[51870]: pgmap v1669: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
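[editor's note] At 20:52:21 the monitors reported the pool full ("reached quota's max_objects: 10") and raised the POOL_FULL health check. These commands are not part of the test, but a hedged way to confirm the same state from a shell would be:

    ceph health detail                  # should list POOL_FULL: 1 pool(s) full
    ceph osd pool get-quota testpool    # shows the configured max_objects / max_bytes
    ceph df                             # per-pool usage alongside the quota
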
2026-03-09T20:52:46.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:46 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:52:46.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:46 vm09.local ceph-mon[54524]: pgmap v1669: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:52:46.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:46 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:52:47.273 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:52:46 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:52:47.515 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=168721 2026-03-09T20:52:47.515 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph osd pool set-quota 27ffa175-ba53-4b7b-afd8-5d830c8341ae max_objects 100 2026-03-09T20:52:47.515 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p 27ffa175-ba53-4b7b-afd8-5d830c8341ae put onemore /etc/passwd 2026-03-09T20:52:47.572 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:47.572+0000 7f3cf61ce640 1 -- 192.168.123.105:0/798917518 >> v1:192.168.123.105:6789/0 conn(0x7f3cf0075b20 legacy=0x7f3cf0114470 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:47.573 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:47.572+0000 7f3cf61ce640 1 -- 192.168.123.105:0/798917518 shutdown_connections 2026-03-09T20:52:47.573 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:47.572+0000 7f3cf61ce640 1 -- 192.168.123.105:0/798917518 >> 192.168.123.105:0/798917518 conn(0x7f3cf00fe3b0 msgr2=0x7f3cf01007d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:52:47.573 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:47.572+0000 7f3cf61ce640 1 -- 192.168.123.105:0/798917518 shutdown_connections 2026-03-09T20:52:47.573 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:47.572+0000 7f3cf61ce640 1 -- 192.168.123.105:0/798917518 wait complete. 
2026-03-09T20:52:47.573 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:47.572+0000 7f3cf61ce640 1 Processor -- start 2026-03-09T20:52:47.573 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:47.572+0000 7f3cf61ce640 1 -- start start 2026-03-09T20:52:47.573 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:47.572+0000 7f3cf61ce640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f3cf01158b0 con 0x7f3cf0075b20 2026-03-09T20:52:47.573 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:47.573+0000 7f3cf61ce640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f3cf01b1310 con 0x7f3cf01065f0 2026-03-09T20:52:47.573 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:47.573+0000 7f3cf61ce640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f3cf01b24f0 con 0x7f3cf0115b90 2026-03-09T20:52:47.573 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:47.573+0000 7f3ceeffd640 1 --1- >> v1:192.168.123.109:6789/0 conn(0x7f3cf01065f0 0x7f3cf0112d80 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.109:6789/0 says I am v1:192.168.123.105:50234/0 (socket says 192.168.123.105:50234) 2026-03-09T20:52:47.573 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:47.573+0000 7f3ceeffd640 1 -- 192.168.123.105:0/3951457830 learned_addr learned my addr 192.168.123.105:0/3951457830 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:52:47.574 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:47.573+0000 7f3cecff9640 1 -- 192.168.123.105:0/3951457830 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 731580656 0 0) 0x7f3cf01b1310 con 0x7f3cf01065f0 2026-03-09T20:52:47.574 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:47.573+0000 7f3cecff9640 1 -- 192.168.123.105:0/3951457830 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f3cbc003620 con 0x7f3cf01065f0 2026-03-09T20:52:47.574 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:47.573+0000 7f3cecff9640 1 -- 192.168.123.105:0/3951457830 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1534691947 0 0) 0x7f3cf01b24f0 con 0x7f3cf0115b90 2026-03-09T20:52:47.574 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:47.573+0000 7f3cecff9640 1 -- 192.168.123.105:0/3951457830 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f3cf01b1310 con 0x7f3cf0115b90 2026-03-09T20:52:47.574 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:47.573+0000 7f3cecff9640 1 -- 192.168.123.105:0/3951457830 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 4055854260 0 0) 0x7f3cf01158b0 con 0x7f3cf0075b20 2026-03-09T20:52:47.574 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:47.573+0000 7f3cecff9640 1 -- 192.168.123.105:0/3951457830 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f3cf01b24f0 con 0x7f3cf0075b20 2026-03-09T20:52:47.574 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:47.573+0000 7f3cecff9640 1 -- 192.168.123.105:0/3951457830 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3263257355 0 0) 0x7f3cbc003620 con 0x7f3cf01065f0 2026-03-09T20:52:47.574 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:47.573+0000 7f3cecff9640 1 -- 192.168.123.105:0/3951457830 --> 
v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f3cf01158b0 con 0x7f3cf01065f0 2026-03-09T20:52:47.574 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:47.573+0000 7f3cecff9640 1 -- 192.168.123.105:0/3951457830 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f3cdc004480 con 0x7f3cf01065f0 2026-03-09T20:52:47.574 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:47.574+0000 7f3cecff9640 1 -- 192.168.123.105:0/3951457830 <== mon.1 v1:192.168.123.109:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 1208288414 0 0) 0x7f3cf01158b0 con 0x7f3cf01065f0 2026-03-09T20:52:47.574 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:47.574+0000 7f3cecff9640 1 -- 192.168.123.105:0/3951457830 >> v1:192.168.123.105:6790/0 conn(0x7f3cf0115b90 legacy=0x7f3cf01aebe0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:47.574 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:47.574+0000 7f3cecff9640 1 -- 192.168.123.105:0/3951457830 >> v1:192.168.123.105:6789/0 conn(0x7f3cf0075b20 legacy=0x7f3cf0112670 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:47.574 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:47.574+0000 7f3cecff9640 1 -- 192.168.123.105:0/3951457830 --> v1:192.168.123.109:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f3cf01b36d0 con 0x7f3cf01065f0 2026-03-09T20:52:47.574 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:47.574+0000 7f3cf61ce640 1 -- 192.168.123.105:0/3951457830 --> v1:192.168.123.109:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f3cf01b0360 con 0x7f3cf01065f0 2026-03-09T20:52:47.574 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:47.574+0000 7f3cf61ce640 1 -- 192.168.123.105:0/3951457830 --> v1:192.168.123.109:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f3cf01b08d0 con 0x7f3cf01065f0 2026-03-09T20:52:47.575 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:47.574+0000 7f3cecff9640 1 -- 192.168.123.105:0/3951457830 <== mon.1 v1:192.168.123.109:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f3cdc002e70 con 0x7f3cf01065f0 2026-03-09T20:52:47.575 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:47.574+0000 7f3cecff9640 1 -- 192.168.123.105:0/3951457830 <== mon.1 v1:192.168.123.109:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f3cdc004ce0 con 0x7f3cf01065f0 2026-03-09T20:52:47.575 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:47.575+0000 7f3cd27fc640 1 -- 192.168.123.105:0/3951457830 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f3cf0076b50 con 0x7f3cf01065f0 2026-03-09T20:52:47.576 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:47.575+0000 7f3cecff9640 1 -- 192.168.123.105:0/3951457830 <== mon.1 v1:192.168.123.109:6789/0 7 ==== mgrmap(e 22) ==== 100060+0+0 (unknown 1243104359 0 0) 0x7f3cdc004ce0 con 0x7f3cf01065f0 2026-03-09T20:52:47.576 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:47.576+0000 7f3cecff9640 1 -- 192.168.123.105:0/3951457830 <== mon.1 v1:192.168.123.109:6789/0 8 ==== osd_map(765..765 src has 1..765) ==== 7794+0+0 (unknown 2469263347 0 0) 0x7f3cdc095610 con 0x7f3cf01065f0 2026-03-09T20:52:47.577 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:47.576+0000 7f3cecff9640 1 -- 192.168.123.105:0/3951457830 --> v1:192.168.123.109:6789/0 -- mon_subscribe({osdmap=766}) -- 0x7f3cf01158b0 con 0x7f3cf01065f0 
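[editor's note] The dense "-->" / "<==" lines here are the client-side messenger at debug level 1: a legacy msgr v1 banner and auth exchange with the mons, mon_subscribe for config/monmap/osdmap, get_command_descriptions, and only then the actual mon_command. A similar trace can usually be captured for a one-off CLI call by passing debug overrides on the command line (a sketch, assuming the placeholder names used above):

    rados -p testpool stat obj1 --debug-ms 1 --debug-monc 1 2> client-trace.log
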
2026-03-09T20:52:47.579 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:47.579+0000 7f3cecff9640 1 -- 192.168.123.105:0/3951457830 <== mon.1 v1:192.168.123.109:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f3cdc061c40 con 0x7f3cf01065f0 2026-03-09T20:52:47.676 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:47.675+0000 7f3cd27fc640 1 -- 192.168.123.105:0/3951457830 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "100"} v 0) -- 0x7f3cf011c800 con 0x7f3cf01065f0 2026-03-09T20:52:48.120 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:48.119+0000 7f3cecff9640 1 -- 192.168.123.105:0/3951457830 <== mon.1 v1:192.168.123.109:6789/0 10 ==== osd_map(766..766 src has 1..766) ==== 628+0+0 (unknown 2359377070 0 0) 0x7f3cdc059ba0 con 0x7f3cf01065f0 2026-03-09T20:52:48.120 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:48.119+0000 7f3cecff9640 1 -- 192.168.123.105:0/3951457830 --> v1:192.168.123.109:6789/0 -- mon_subscribe({osdmap=767}) -- 0x7f3cf01b24f0 con 0x7f3cf01065f0 2026-03-09T20:52:48.127 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:48.126+0000 7f3cecff9640 1 -- 192.168.123.105:0/3951457830 <== mon.1 v1:192.168.123.109:6789/0 11 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "100"}]=0 set-quota max_objects = 100 for pool 27ffa175-ba53-4b7b-afd8-5d830c8341ae v766) ==== 225+0+0 (unknown 3152412983 0 0) 0x7f3cdc066b80 con 0x7f3cf01065f0 2026-03-09T20:52:48.179 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:48.179+0000 7f3cd27fc640 1 -- 192.168.123.105:0/3951457830 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "100"} v 0) -- 0x7f3cf01b34b0 con 0x7f3cf01065f0 2026-03-09T20:52:48.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:48 vm05.local ceph-mon[61345]: pgmap v1670: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:52:48.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:48 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:52:48.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:48 vm05.local ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3951457830' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "100"}]: dispatch 2026-03-09T20:52:48.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:48 vm05.local ceph-mon[61345]: from='client.50476 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "100"}]: dispatch 2026-03-09T20:52:48.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:48 vm05.local ceph-mon[51870]: pgmap v1670: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:52:48.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:48 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:52:48.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:48 vm05.local ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3951457830' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "100"}]: dispatch 2026-03-09T20:52:48.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:48 vm05.local ceph-mon[51870]: from='client.50476 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "100"}]: dispatch 2026-03-09T20:52:48.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:48 vm09.local ceph-mon[54524]: pgmap v1670: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:52:48.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:48 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:52:48.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:48 vm09.local ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3951457830' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "100"}]: dispatch 2026-03-09T20:52:48.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:48 vm09.local ceph-mon[54524]: from='client.50476 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "100"}]: dispatch 2026-03-09T20:52:48.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:52:48 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:52:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:52:49.127 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:49.126+0000 7f3cecff9640 1 -- 192.168.123.105:0/3951457830 <== mon.1 v1:192.168.123.109:6789/0 12 ==== osd_map(767..767 src has 1..767) ==== 628+0+0 (unknown 4161312032 0 0) 0x7f3cdc003440 con 0x7f3cf01065f0 2026-03-09T20:52:49.127 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:49.126+0000 7f3cecff9640 1 -- 192.168.123.105:0/3951457830 --> v1:192.168.123.109:6789/0 -- mon_subscribe({osdmap=768}) -- 0x7f3cf01b1310 con 0x7f3cf01065f0 2026-03-09T20:52:49.133 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:49.132+0000 7f3cecff9640 1 -- 192.168.123.105:0/3951457830 <== mon.1 v1:192.168.123.109:6789/0 13 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "100"}]=0 set-quota max_objects = 100 for pool 27ffa175-ba53-4b7b-afd8-5d830c8341ae v767) ==== 225+0+0 (unknown 2466594127 0 0) 0x7f3cdc0592a0 con 0x7f3cf01065f0 2026-03-09T20:52:49.133 INFO:tasks.workunit.client.0.vm05.stderr:set-quota max_objects = 100 for pool 27ffa175-ba53-4b7b-afd8-5d830c8341ae 2026-03-09T20:52:49.136 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:49.135+0000 7f3cd27fc640 1 -- 192.168.123.105:0/3951457830 >> v1:192.168.123.105:6800/1903060503 conn(0x7f3cbc077eb0 legacy=0x7f3cbc07a370 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:49.136 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:49.135+0000 7f3cd27fc640 1 -- 192.168.123.105:0/3951457830 >> v1:192.168.123.109:6789/0 conn(0x7f3cf01065f0 legacy=0x7f3cf0112d80 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:49.136 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:49.135+0000 7f3cd27fc640 1 -- 192.168.123.105:0/3951457830 shutdown_connections 2026-03-09T20:52:49.136 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:49.135+0000 7f3cd27fc640 1 -- 192.168.123.105:0/3951457830 >> 192.168.123.105:0/3951457830 conn(0x7f3cf00fe3b0 msgr2=0x7f3cf00ff260 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:52:49.136 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:49.135+0000 7f3cd27fc640 1 -- 192.168.123.105:0/3951457830 shutdown_connections 2026-03-09T20:52:49.136 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:49.135+0000 7f3cd27fc640 1 -- 192.168.123.105:0/3951457830 wait complete. 
2026-03-09T20:52:49.144 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 168721 2026-03-09T20:52:49.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:49 vm05.local ceph-mon[61345]: from='client.50476 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "100"}]': finished 2026-03-09T20:52:49.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:49 vm05.local ceph-mon[61345]: osdmap e766: 8 total, 8 up, 8 in 2026-03-09T20:52:49.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:49 vm05.local ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3951457830' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "100"}]: dispatch 2026-03-09T20:52:49.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:49 vm05.local ceph-mon[61345]: from='client.50476 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "100"}]: dispatch 2026-03-09T20:52:49.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:49 vm05.local ceph-mon[61345]: pgmap v1672: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:52:49.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:49 vm05.local ceph-mon[51870]: from='client.50476 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "100"}]': finished 2026-03-09T20:52:49.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:49 vm05.local ceph-mon[51870]: osdmap e766: 8 total, 8 up, 8 in 2026-03-09T20:52:49.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:49 vm05.local ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3951457830' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "100"}]: dispatch 2026-03-09T20:52:49.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:49 vm05.local ceph-mon[51870]: from='client.50476 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "100"}]: dispatch 2026-03-09T20:52:49.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:49 vm05.local ceph-mon[51870]: pgmap v1672: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:52:49.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:49 vm09.local ceph-mon[54524]: from='client.50476 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "100"}]': finished 2026-03-09T20:52:49.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:49 vm09.local ceph-mon[54524]: osdmap e766: 8 total, 8 up, 8 in 2026-03-09T20:52:49.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:49 vm09.local ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3951457830' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "100"}]: dispatch 2026-03-09T20:52:49.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:49 vm09.local ceph-mon[54524]: from='client.50476 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "100"}]: dispatch 2026-03-09T20:52:49.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:49 vm09.local ceph-mon[54524]: pgmap v1672: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:52:50.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:50 vm05.local ceph-mon[61345]: from='client.50476 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "100"}]': finished 2026-03-09T20:52:50.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:50 vm05.local ceph-mon[61345]: osdmap e767: 8 total, 8 up, 8 in 2026-03-09T20:52:50.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:50 vm05.local ceph-mon[51870]: from='client.50476 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "100"}]': finished 2026-03-09T20:52:50.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:50 vm05.local ceph-mon[51870]: osdmap e767: 8 total, 8 up, 8 in 2026-03-09T20:52:50.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:50 vm09.local ceph-mon[54524]: from='client.50476 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "100"}]': finished 2026-03-09T20:52:50.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:50 vm09.local ceph-mon[54524]: osdmap e767: 8 total, 8 up, 8 in 2026-03-09T20:52:51.365 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 0 -ne 0 ']' 2026-03-09T20:52:51.365 INFO:tasks.workunit.client.0.vm05.stderr:+ true 2026-03-09T20:52:51.365 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p 27ffa175-ba53-4b7b-afd8-5d830c8341ae put twomore /etc/passwd 2026-03-09T20:52:51.395 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph osd pool set-quota 27ffa175-ba53-4b7b-afd8-5d830c8341ae max_bytes 100 2026-03-09T20:52:51.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:51 vm05.local ceph-mon[51870]: pgmap v1674: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:52:51.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:51 vm05.local ceph-mon[61345]: pgmap v1674: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:52:51.449 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:51.448+0000 7f4e1e22b640 1 -- 192.168.123.105:0/2780431794 >> v1:192.168.123.105:6789/0 conn(0x7f4e18108680 legacy=0x7f4e18108a60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:51.449 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:51.449+0000 7f4e1e22b640 1 -- 192.168.123.105:0/2780431794 shutdown_connections 2026-03-09T20:52:51.449 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:51.449+0000 7f4e1e22b640 1 -- 192.168.123.105:0/2780431794 >> 192.168.123.105:0/2780431794 conn(0x7f4e180fe3b0 msgr2=0x7f4e181007d0 unknown :-1 s=STATE_NONE l=0).mark_down 
2026-03-09T20:52:51.449 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:51.449+0000 7f4e1e22b640 1 -- 192.168.123.105:0/2780431794 shutdown_connections 2026-03-09T20:52:51.450 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:51.449+0000 7f4e1e22b640 1 -- 192.168.123.105:0/2780431794 wait complete. 2026-03-09T20:52:51.450 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:51.449+0000 7f4e1e22b640 1 Processor -- start 2026-03-09T20:52:51.450 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:51.449+0000 7f4e1e22b640 1 -- start start 2026-03-09T20:52:51.450 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:51.449+0000 7f4e1e22b640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f4e181ab900 con 0x7f4e18108680 2026-03-09T20:52:51.450 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:51.449+0000 7f4e1e22b640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f4e181acb00 con 0x7f4e1810f0f0 2026-03-09T20:52:51.450 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:51.449+0000 7f4e1e22b640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f4e181add00 con 0x7f4e1810b520 2026-03-09T20:52:51.451 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:51.450+0000 7f4e177fe640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f4e18108680 0x7f4e1810e930 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:47880/0 (socket says 192.168.123.105:47880) 2026-03-09T20:52:51.451 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:51.450+0000 7f4e177fe640 1 -- 192.168.123.105:0/3955454375 learned_addr learned my addr 192.168.123.105:0/3955454375 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:52:51.451 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:51.450+0000 7f4e14ff9640 1 -- 192.168.123.105:0/3955454375 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1974513590 0 0) 0x7f4e181ab900 con 0x7f4e18108680 2026-03-09T20:52:51.451 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:51.450+0000 7f4e14ff9640 1 -- 192.168.123.105:0/3955454375 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f4df0003620 con 0x7f4e18108680 2026-03-09T20:52:51.451 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:51.450+0000 7f4e14ff9640 1 -- 192.168.123.105:0/3955454375 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2579318801 0 0) 0x7f4df0003620 con 0x7f4e18108680 2026-03-09T20:52:51.451 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:51.450+0000 7f4e14ff9640 1 -- 192.168.123.105:0/3955454375 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f4e181ab900 con 0x7f4e18108680 2026-03-09T20:52:51.451 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:51.450+0000 7f4e14ff9640 1 -- 192.168.123.105:0/3955454375 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f4dfc003280 con 0x7f4e18108680 2026-03-09T20:52:51.451 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:51.450+0000 7f4e14ff9640 1 -- 192.168.123.105:0/3955454375 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2893324945 0 0) 0x7f4e181add00 con 0x7f4e1810b520 2026-03-09T20:52:51.451 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:51.450+0000 
7f4e14ff9640 1 -- 192.168.123.105:0/3955454375 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f4df0003620 con 0x7f4e1810b520 2026-03-09T20:52:51.451 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:51.450+0000 7f4e14ff9640 1 -- 192.168.123.105:0/3955454375 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 753020199 0 0) 0x7f4e181acb00 con 0x7f4e1810f0f0 2026-03-09T20:52:51.451 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:51.450+0000 7f4e14ff9640 1 -- 192.168.123.105:0/3955454375 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f4e181add00 con 0x7f4e1810f0f0 2026-03-09T20:52:51.451 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:51.450+0000 7f4e14ff9640 1 -- 192.168.123.105:0/3955454375 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2372881749 0 0) 0x7f4df0003620 con 0x7f4e1810b520 2026-03-09T20:52:51.451 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:51.451+0000 7f4e14ff9640 1 -- 192.168.123.105:0/3955454375 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f4e181acb00 con 0x7f4e1810b520 2026-03-09T20:52:51.451 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:51.451+0000 7f4e14ff9640 1 -- 192.168.123.105:0/3955454375 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 1101377820 0 0) 0x7f4e181ab900 con 0x7f4e18108680 2026-03-09T20:52:51.451 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:51.451+0000 7f4e14ff9640 1 -- 192.168.123.105:0/3955454375 >> v1:192.168.123.105:6790/0 conn(0x7f4e1810b520 legacy=0x7f4e181a68b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:51.451 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:51.451+0000 7f4e14ff9640 1 -- 192.168.123.105:0/3955454375 >> v1:192.168.123.109:6789/0 conn(0x7f4e1810f0f0 legacy=0x7f4e181aa000 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:51.451 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:51.451+0000 7f4e14ff9640 1 -- 192.168.123.105:0/3955454375 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f4e181aef00 con 0x7f4e18108680 2026-03-09T20:52:51.451 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:51.451+0000 7f4e14ff9640 1 -- 192.168.123.105:0/3955454375 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f4dfc003b60 con 0x7f4e18108680 2026-03-09T20:52:51.451 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:51.451+0000 7f4e1e22b640 1 -- 192.168.123.105:0/3955454375 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f4e181acd30 con 0x7f4e18108680 2026-03-09T20:52:51.453 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:51.451+0000 7f4e14ff9640 1 -- 192.168.123.105:0/3955454375 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f4dfc004f20 con 0x7f4e18108680 2026-03-09T20:52:51.453 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:51.451+0000 7f4e1e22b640 1 -- 192.168.123.105:0/3955454375 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f4e181ad2f0 con 0x7f4e18108680 2026-03-09T20:52:51.456 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:51.452+0000 7f4e14ff9640 1 -- 192.168.123.105:0/3955454375 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 22) ==== 100060+0+0 (unknown 1243104359 0 0) 
0x7f4dfc003710 con 0x7f4e18108680 2026-03-09T20:52:51.456 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:51.452+0000 7f4e1e22b640 1 -- 192.168.123.105:0/3955454375 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f4ddc005180 con 0x7f4e18108680 2026-03-09T20:52:51.456 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:51.453+0000 7f4e14ff9640 1 -- 192.168.123.105:0/3955454375 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(768..768 src has 1..768) ==== 7794+0+0 (unknown 1583111159 0 0) 0x7f4dfc0958d0 con 0x7f4e18108680 2026-03-09T20:52:51.456 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:51.456+0000 7f4e14ff9640 1 -- 192.168.123.105:0/3955454375 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f4dfc061f00 con 0x7f4e18108680 2026-03-09T20:52:51.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:51 vm09.local ceph-mon[54524]: pgmap v1674: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:52:51.553 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:51.552+0000 7f4e1e22b640 1 -- 192.168.123.105:0/3955454375 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "100"} v 0) -- 0x7f4ddc005470 con 0x7f4e18108680 2026-03-09T20:52:52.351 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:52.350+0000 7f4e14ff9640 1 -- 192.168.123.105:0/3955454375 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "100"}]=0 set-quota max_bytes = 100 for pool 27ffa175-ba53-4b7b-afd8-5d830c8341ae v769) ==== 221+0+0 (unknown 3903308546 0 0) 0x7f4dfc066e40 con 0x7f4e18108680 2026-03-09T20:52:52.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:52 vm05.local ceph-mon[51870]: pool '27ffa175-ba53-4b7b-afd8-5d830c8341ae' no longer out of quota; removing NO_QUOTA flag 2026-03-09T20:52:52.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:52 vm05.local ceph-mon[51870]: Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T20:52:52.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:52 vm05.local ceph-mon[51870]: osdmap e768: 8 total, 8 up, 8 in 2026-03-09T20:52:52.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:52 vm05.local ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3955454375' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-09T20:52:52.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:52 vm05.local ceph-mon[61345]: pool '27ffa175-ba53-4b7b-afd8-5d830c8341ae' no longer out of quota; removing NO_QUOTA flag 2026-03-09T20:52:52.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:52 vm05.local ceph-mon[61345]: Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T20:52:52.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:52 vm05.local ceph-mon[61345]: osdmap e768: 8 total, 8 up, 8 in 2026-03-09T20:52:52.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:52 vm05.local ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/3955454375' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-09T20:52:52.414 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:52.413+0000 7f4e1e22b640 1 -- 192.168.123.105:0/3955454375 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "100"} v 0) -- 0x7f4ddc0028a0 con 0x7f4e18108680 2026-03-09T20:52:52.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:52 vm09.local ceph-mon[54524]: pool '27ffa175-ba53-4b7b-afd8-5d830c8341ae' no longer out of quota; removing NO_QUOTA flag 2026-03-09T20:52:52.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:52 vm09.local ceph-mon[54524]: Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T20:52:52.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:52 vm09.local ceph-mon[54524]: osdmap e768: 8 total, 8 up, 8 in 2026-03-09T20:52:52.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:52 vm09.local ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3955454375' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-09T20:52:53.360 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:53.359+0000 7f4e14ff9640 1 -- 192.168.123.105:0/3955454375 <== mon.0 v1:192.168.123.105:6789/0 11 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "100"}]=0 set-quota max_bytes = 100 for pool 27ffa175-ba53-4b7b-afd8-5d830c8341ae v770) ==== 221+0+0 (unknown 1840957392 0 0) 0x7f4dfc059e60 con 0x7f4e18108680 2026-03-09T20:52:53.360 INFO:tasks.workunit.client.0.vm05.stderr:set-quota max_bytes = 100 for pool 27ffa175-ba53-4b7b-afd8-5d830c8341ae 2026-03-09T20:52:53.362 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:53.361+0000 7f4e1e22b640 1 -- 192.168.123.105:0/3955454375 >> v1:192.168.123.105:6800/1903060503 conn(0x7f4df00783c0 legacy=0x7f4df007a880 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:53.362 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:53.361+0000 7f4e1e22b640 1 -- 192.168.123.105:0/3955454375 >> v1:192.168.123.105:6789/0 conn(0x7f4e18108680 legacy=0x7f4e1810e930 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:52:53.362 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:53.362+0000 7f4e1e22b640 1 -- 192.168.123.105:0/3955454375 shutdown_connections 2026-03-09T20:52:53.362 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:53.362+0000 7f4e1e22b640 1 -- 192.168.123.105:0/3955454375 >> 192.168.123.105:0/3955454375 conn(0x7f4e180fe3b0 msgr2=0x7f4e18100f80 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:52:53.363 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:53.362+0000 7f4e1e22b640 1 -- 192.168.123.105:0/3955454375 shutdown_connections 2026-03-09T20:52:53.363 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:52:53.362+0000 7f4e1e22b640 1 -- 192.168.123.105:0/3955454375 wait complete. 2026-03-09T20:52:53.371 INFO:tasks.workunit.client.0.vm05.stderr:+ sleep 30 2026-03-09T20:52:53.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:53 vm05.local ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3955454375' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "100"}]': finished 2026-03-09T20:52:53.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:53 vm05.local ceph-mon[51870]: osdmap e769: 8 total, 8 up, 8 in 2026-03-09T20:52:53.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:53 vm05.local ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3955454375' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-09T20:52:53.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:53 vm05.local ceph-mon[51870]: pgmap v1677: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T20:52:53.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:53 vm05.local ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3955454375' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "100"}]': finished 2026-03-09T20:52:53.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:53 vm05.local ceph-mon[61345]: osdmap e769: 8 total, 8 up, 8 in 2026-03-09T20:52:53.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:53 vm05.local ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3955454375' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-09T20:52:53.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:53 vm05.local ceph-mon[61345]: pgmap v1677: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T20:52:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:53 vm09.local ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3955454375' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "100"}]': finished 2026-03-09T20:52:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:53 vm09.local ceph-mon[54524]: osdmap e769: 8 total, 8 up, 8 in 2026-03-09T20:52:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:53 vm09.local ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3955454375' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-09T20:52:53.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:53 vm09.local ceph-mon[54524]: pgmap v1677: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T20:52:54.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:54 vm05.local ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3955454375' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "100"}]': finished 2026-03-09T20:52:54.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:54 vm05.local ceph-mon[61345]: osdmap e770: 8 total, 8 up, 8 in 2026-03-09T20:52:54.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:54 vm05.local ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3955454375' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "100"}]': finished 2026-03-09T20:52:54.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:54 vm05.local ceph-mon[51870]: osdmap e770: 8 total, 8 up, 8 in 2026-03-09T20:52:54.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:54 vm09.local ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3955454375' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "100"}]': finished 2026-03-09T20:52:54.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:54 vm09.local ceph-mon[54524]: osdmap e770: 8 total, 8 up, 8 in 2026-03-09T20:52:55.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:55 vm05.local ceph-mon[61345]: pgmap v1679: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 924 B/s rd, 0 op/s 2026-03-09T20:52:55.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:55 vm05.local ceph-mon[51870]: pgmap v1679: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 924 B/s rd, 0 op/s 2026-03-09T20:52:55.772 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:55 vm09.local ceph-mon[54524]: pgmap v1679: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 924 B/s rd, 0 op/s 2026-03-09T20:52:56.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:56 vm05.local ceph-mon[61345]: pool '27ffa175-ba53-4b7b-afd8-5d830c8341ae' is full (reached quota's max_bytes: 100 B) 2026-03-09T20:52:56.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:56 vm05.local ceph-mon[61345]: Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T20:52:56.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:56 vm05.local ceph-mon[61345]: osdmap e771: 8 total, 8 up, 8 in 2026-03-09T20:52:56.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:56 vm05.local ceph-mon[51870]: pool '27ffa175-ba53-4b7b-afd8-5d830c8341ae' is full (reached quota's max_bytes: 100 B) 2026-03-09T20:52:56.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:56 vm05.local ceph-mon[51870]: Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T20:52:56.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:56 vm05.local ceph-mon[51870]: osdmap e771: 8 total, 8 up, 8 in 2026-03-09T20:52:56.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:56 vm09.local ceph-mon[54524]: pool '27ffa175-ba53-4b7b-afd8-5d830c8341ae' is full (reached quota's max_bytes: 100 B) 2026-03-09T20:52:56.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:56 vm09.local ceph-mon[54524]: Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T20:52:56.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:56 vm09.local ceph-mon[54524]: osdmap e771: 8 total, 8 up, 8 in 2026-03-09T20:52:57.273 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:52:56 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:52:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:57 vm05.local ceph-mon[61345]: pgmap v1681: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 963 B/s rd, 770 B/s wr, 1 op/s 2026-03-09T20:52:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:57 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": 
"service status", "format": "json"}]: dispatch 2026-03-09T20:52:57.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:57 vm05.local ceph-mon[51870]: pgmap v1681: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 963 B/s rd, 770 B/s wr, 1 op/s 2026-03-09T20:52:57.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:57 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:52:57.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:57 vm09.local ceph-mon[54524]: pgmap v1681: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 963 B/s rd, 770 B/s wr, 1 op/s 2026-03-09T20:52:57.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:57 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:52:58.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:52:58 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:52:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:53:00.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:52:59 vm09.local ceph-mon[54524]: pgmap v1682: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 811 B/s rd, 648 B/s wr, 1 op/s 2026-03-09T20:53:00.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:52:59 vm05.local ceph-mon[61345]: pgmap v1682: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 811 B/s rd, 648 B/s wr, 1 op/s 2026-03-09T20:53:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:52:59 vm05.local ceph-mon[51870]: pgmap v1682: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 811 B/s rd, 648 B/s wr, 1 op/s 2026-03-09T20:53:02.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:01 vm09.local ceph-mon[54524]: pgmap v1683: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T20:53:02.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:01 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:53:02.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:01 vm05.local ceph-mon[61345]: pgmap v1683: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T20:53:02.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:01 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:53:02.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:01 vm05.local ceph-mon[51870]: pgmap v1683: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-09T20:53:02.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:01 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:53:04.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:03 vm09.local ceph-mon[54524]: pgmap v1684: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 
GiB avail; 1.1 KiB/s rd, 440 B/s wr, 1 op/s 2026-03-09T20:53:04.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:03 vm05.local ceph-mon[61345]: pgmap v1684: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 440 B/s wr, 1 op/s 2026-03-09T20:53:04.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:03 vm05.local ceph-mon[51870]: pgmap v1684: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 440 B/s wr, 1 op/s 2026-03-09T20:53:06.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:05 vm09.local ceph-mon[54524]: pgmap v1685: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 409 B/s wr, 1 op/s 2026-03-09T20:53:06.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:05 vm05.local ceph-mon[61345]: pgmap v1685: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 409 B/s wr, 1 op/s 2026-03-09T20:53:06.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:05 vm05.local ceph-mon[51870]: pgmap v1685: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 409 B/s wr, 1 op/s 2026-03-09T20:53:06.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:06 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:53:06.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:06 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:53:06.973 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:53:06 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:53:06.973 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:06 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:53:08.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:07 vm09.local ceph-mon[54524]: pgmap v1686: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 992 B/s rd, 0 op/s 2026-03-09T20:53:08.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:07 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:53:08.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:07 vm09.local ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:53:08.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:07 vm09.local ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:53:08.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:07 vm09.local ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:53:08.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:07 vm09.local ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:53:08.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:07 vm05.local ceph-mon[61345]: pgmap v1686: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 992 B/s rd, 0 op/s 2026-03-09T20:53:08.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:07 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' 
cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:53:08.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:07 vm05.local ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:53:08.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:07 vm05.local ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:53:08.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:07 vm05.local ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:53:08.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:07 vm05.local ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:53:08.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:07 vm05.local ceph-mon[51870]: pgmap v1686: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 992 B/s rd, 0 op/s 2026-03-09T20:53:08.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:07 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:53:08.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:07 vm05.local ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:53:08.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:07 vm05.local ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:53:08.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:07 vm05.local ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:53:08.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:07 vm05.local ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:53:08.772 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:53:08 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:53:08] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:53:09.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:08 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:53:09.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:08 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:53:09.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:08 vm09.local ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:53:09.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:08 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:53:09.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:08 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:53:09.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:08 vm05.local ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:53:09.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:08 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:53:09.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:08 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", 
"entity": "client.admin"}]: dispatch 2026-03-09T20:53:09.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:08 vm05.local ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:53:10.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:09 vm05.local ceph-mon[61345]: pgmap v1687: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:53:10.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:09 vm05.local ceph-mon[51870]: pgmap v1687: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:53:10.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:09 vm09.local ceph-mon[54524]: pgmap v1687: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:53:12.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:11 vm09.local ceph-mon[54524]: pgmap v1688: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:53:12.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:11 vm05.local ceph-mon[61345]: pgmap v1688: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:53:12.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:11 vm05.local ceph-mon[51870]: pgmap v1688: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:53:14.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:14 vm09.local ceph-mon[54524]: pgmap v1689: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:53:14.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:14 vm05.local ceph-mon[61345]: pgmap v1689: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:53:14.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:14 vm05.local ceph-mon[51870]: pgmap v1689: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:53:16.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:16 vm09.local ceph-mon[54524]: pgmap v1690: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:53:16.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:16 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:53:16.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:16 vm05.local ceph-mon[61345]: pgmap v1690: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:53:16.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:16 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:53:16.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:16 vm05.local ceph-mon[51870]: pgmap v1690: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:53:16.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:16 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:53:17.258 
INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:53:16 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:53:17.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:17 vm09.local ceph-mon[54524]: pgmap v1691: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:53:17.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:17 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:53:17.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:17 vm05.local ceph-mon[61345]: pgmap v1691: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:53:17.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:17 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:53:17.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:17 vm05.local ceph-mon[51870]: pgmap v1691: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:53:17.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:17 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:53:18.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:53:18 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:53:18] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:53:20.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:19 vm09.local ceph-mon[54524]: pgmap v1692: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:53:20.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:19 vm05.local ceph-mon[61345]: pgmap v1692: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:53:20.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:19 vm05.local ceph-mon[51870]: pgmap v1692: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:53:22.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:21 vm09.local ceph-mon[54524]: pgmap v1693: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:53:22.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:21 vm05.local ceph-mon[61345]: pgmap v1693: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:53:22.175 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:21 vm05.local ceph-mon[51870]: pgmap v1693: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:53:23.373 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=169272 2026-03-09T20:53:23.374 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph osd pool set-quota 27ffa175-ba53-4b7b-afd8-5d830c8341ae max_bytes 0 2026-03-09T20:53:23.374 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p 27ffa175-ba53-4b7b-afd8-5d830c8341ae put two /etc/passwd 2026-03-09T20:53:23.431 
INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:23.430+0000 7f88e61d6640 1 -- 192.168.123.105:0/1228768148 >> v1:192.168.123.105:6789/0 conn(0x7f88e00754a0 legacy=0x7f88e0075880 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:23.431 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:23.430+0000 7f88e61d6640 1 -- 192.168.123.105:0/1228768148 shutdown_connections 2026-03-09T20:53:23.431 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:23.430+0000 7f88e61d6640 1 -- 192.168.123.105:0/1228768148 >> 192.168.123.105:0/1228768148 conn(0x7f88e00fe3b0 msgr2=0x7f88e01007d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:53:23.431 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:23.430+0000 7f88e61d6640 1 -- 192.168.123.105:0/1228768148 shutdown_connections 2026-03-09T20:53:23.431 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:23.430+0000 7f88e61d6640 1 -- 192.168.123.105:0/1228768148 wait complete. 2026-03-09T20:53:23.431 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:23.430+0000 7f88e61d6640 1 Processor -- start 2026-03-09T20:53:23.431 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:23.431+0000 7f88e61d6640 1 -- start start 2026-03-09T20:53:23.432 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:23.431+0000 7f88e61d6640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f88e01ab790 con 0x7f88e00754a0 2026-03-09T20:53:23.432 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:23.431+0000 7f88e61d6640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f88e01ac990 con 0x7f88e01113e0 2026-03-09T20:53:23.432 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:23.431+0000 7f88e61d6640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f88e01adb90 con 0x7f88e0075f90 2026-03-09T20:53:23.432 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:23.431+0000 7f88deffd640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7f88e0075f90 0x7f88e01a66b0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.105:46642/0 (socket says 192.168.123.105:46642) 2026-03-09T20:53:23.432 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:23.431+0000 7f88deffd640 1 -- 192.168.123.105:0/1801529060 learned_addr learned my addr 192.168.123.105:0/1801529060 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:53:23.432 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:23.431+0000 7f88df7fe640 1 --1- 192.168.123.105:0/1801529060 >> v1:192.168.123.105:6789/0 conn(0x7f88e00754a0 0x7f88e010f8a0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:47170/0 (socket says 192.168.123.105:47170) 2026-03-09T20:53:23.432 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:23.431+0000 7f88dcff9640 1 -- 192.168.123.105:0/1801529060 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 4141796796 0 0) 0x7f88e01adb90 con 0x7f88e0075f90 2026-03-09T20:53:23.432 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:23.431+0000 7f88dcff9640 1 -- 192.168.123.105:0/1801529060 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f88b0003620 con 0x7f88e0075f90 2026-03-09T20:53:23.432 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:23.431+0000 7f88dcff9640 1 -- 
192.168.123.105:0/1801529060 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 294810596 0 0) 0x7f88e01ac990 con 0x7f88e01113e0 2026-03-09T20:53:23.432 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:23.431+0000 7f88dcff9640 1 -- 192.168.123.105:0/1801529060 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f88e01adb90 con 0x7f88e01113e0 2026-03-09T20:53:23.432 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:23.431+0000 7f88dcff9640 1 -- 192.168.123.105:0/1801529060 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2850872200 0 0) 0x7f88e01ab790 con 0x7f88e00754a0 2026-03-09T20:53:23.432 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:23.432+0000 7f88dcff9640 1 -- 192.168.123.105:0/1801529060 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f88e01ac990 con 0x7f88e00754a0 2026-03-09T20:53:23.432 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:23.432+0000 7f88dcff9640 1 -- 192.168.123.105:0/1801529060 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 744473741 0 0) 0x7f88b0003620 con 0x7f88e0075f90 2026-03-09T20:53:23.433 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:23.432+0000 7f88dcff9640 1 -- 192.168.123.105:0/1801529060 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f88e01ab790 con 0x7f88e0075f90 2026-03-09T20:53:23.433 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:23.432+0000 7f88dcff9640 1 -- 192.168.123.105:0/1801529060 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3362048159 0 0) 0x7f88e01adb90 con 0x7f88e01113e0 2026-03-09T20:53:23.433 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:23.432+0000 7f88dcff9640 1 -- 192.168.123.105:0/1801529060 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f88b0003620 con 0x7f88e01113e0 2026-03-09T20:53:23.433 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:23.432+0000 7f88dcff9640 1 -- 192.168.123.105:0/1801529060 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 506884237 0 0) 0x7f88e01ac990 con 0x7f88e00754a0 2026-03-09T20:53:23.433 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:23.432+0000 7f88dcff9640 1 -- 192.168.123.105:0/1801529060 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f88e01adb90 con 0x7f88e00754a0 2026-03-09T20:53:23.433 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:23.432+0000 7f88dcff9640 1 -- 192.168.123.105:0/1801529060 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f88d00031a0 con 0x7f88e0075f90 2026-03-09T20:53:23.433 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:23.432+0000 7f88dcff9640 1 -- 192.168.123.105:0/1801529060 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f88d4003420 con 0x7f88e01113e0 2026-03-09T20:53:23.433 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:23.432+0000 7f88dcff9640 1 -- 192.168.123.105:0/1801529060 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f88cc002dc0 con 0x7f88e00754a0 2026-03-09T20:53:23.433 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:23.432+0000 7f88dcff9640 1 -- 192.168.123.105:0/1801529060 <== mon.2 v1:192.168.123.105:6790/0 4 ==== 
auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 3944315055 0 0) 0x7f88e01ab790 con 0x7f88e0075f90 2026-03-09T20:53:23.433 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:23.432+0000 7f88dcff9640 1 -- 192.168.123.105:0/1801529060 >> v1:192.168.123.109:6789/0 conn(0x7f88e01113e0 legacy=0x7f88e01a9e90 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:23.433 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:23.433+0000 7f88dcff9640 1 -- 192.168.123.105:0/1801529060 >> v1:192.168.123.105:6789/0 conn(0x7f88e00754a0 legacy=0x7f88e010f8a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:23.433 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:23.433+0000 7f88dcff9640 1 -- 192.168.123.105:0/1801529060 --> v1:192.168.123.105:6790/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f88e01aed90 con 0x7f88e0075f90 2026-03-09T20:53:23.434 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:23.433+0000 7f88e61d6640 1 -- 192.168.123.105:0/1801529060 --> v1:192.168.123.105:6790/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f88e01acbc0 con 0x7f88e0075f90 2026-03-09T20:53:23.434 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:23.433+0000 7f88e61d6640 1 -- 192.168.123.105:0/1801529060 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=0}) -- 0x7f88e01ad0e0 con 0x7f88e0075f90 2026-03-09T20:53:23.434 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:23.433+0000 7f88dcff9640 1 -- 192.168.123.105:0/1801529060 <== mon.2 v1:192.168.123.105:6790/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f88d0003500 con 0x7f88e0075f90 2026-03-09T20:53:23.434 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:23.433+0000 7f88dcff9640 1 -- 192.168.123.105:0/1801529060 <== mon.2 v1:192.168.123.105:6790/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f88d0005c80 con 0x7f88e0075f90 2026-03-09T20:53:23.435 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:23.435+0000 7f88c27fc640 1 -- 192.168.123.105:0/1801529060 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f88e0076b50 con 0x7f88e0075f90 2026-03-09T20:53:23.436 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:23.435+0000 7f88dcff9640 1 -- 192.168.123.105:0/1801529060 <== mon.2 v1:192.168.123.105:6790/0 7 ==== mgrmap(e 22) ==== 100060+0+0 (unknown 1243104359 0 0) 0x7f88d00039c0 con 0x7f88e0075f90 2026-03-09T20:53:23.436 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:23.435+0000 7f88dcff9640 1 -- 192.168.123.105:0/1801529060 <== mon.2 v1:192.168.123.105:6790/0 8 ==== osd_map(771..771 src has 1..771) ==== 7794+0+0 (unknown 3166940157 0 0) 0x7f88d0095610 con 0x7f88e0075f90 2026-03-09T20:53:23.436 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:23.435+0000 7f88dcff9640 1 -- 192.168.123.105:0/1801529060 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=772}) -- 0x7f88e01ab790 con 0x7f88e0075f90 2026-03-09T20:53:23.439 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:23.438+0000 7f88dcff9640 1 -- 192.168.123.105:0/1801529060 <== mon.2 v1:192.168.123.105:6790/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f88d0061c40 con 0x7f88e0075f90 2026-03-09T20:53:23.535 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:23.534+0000 7f88c27fc640 1 -- 192.168.123.105:0/1801529060 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "osd pool set-quota", "pool": 
"27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "0"} v 0) -- 0x7f88e010de70 con 0x7f88e0075f90 2026-03-09T20:53:23.957 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:23.956+0000 7f88dcff9640 1 -- 192.168.123.105:0/1801529060 <== mon.2 v1:192.168.123.105:6790/0 10 ==== osd_map(772..772 src has 1..772) ==== 628+0+0 (unknown 867233010 0 0) 0x7f88d0059ba0 con 0x7f88e0075f90 2026-03-09T20:53:23.957 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:23.956+0000 7f88dcff9640 1 -- 192.168.123.105:0/1801529060 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=773}) -- 0x7f88e01adb90 con 0x7f88e0075f90 2026-03-09T20:53:23.962 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:23.961+0000 7f88dcff9640 1 -- 192.168.123.105:0/1801529060 <== mon.2 v1:192.168.123.105:6790/0 11 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "0"}]=0 set-quota max_bytes = 0 for pool 27ffa175-ba53-4b7b-afd8-5d830c8341ae v772) ==== 217+0+0 (unknown 2319595497 0 0) 0x7f88d0066b80 con 0x7f88e0075f90 2026-03-09T20:53:24.027 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:24.025+0000 7f88c27fc640 1 -- 192.168.123.105:0/1801529060 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "0"} v 0) -- 0x7f88e0115ef0 con 0x7f88e0075f90 2026-03-09T20:53:24.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:23 vm09.local ceph-mon[54524]: pgmap v1694: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:53:24.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:23 vm09.local ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1801529060' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T20:53:24.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:23 vm09.local ceph-mon[54524]: from='client.49751 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T20:53:24.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:23 vm05.local ceph-mon[51870]: pgmap v1694: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:53:24.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:23 vm05.local ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1801529060' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T20:53:24.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:23 vm05.local ceph-mon[51870]: from='client.49751 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T20:53:24.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:23 vm05.local ceph-mon[61345]: pgmap v1694: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:53:24.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:23 vm05.local ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1801529060' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T20:53:24.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:23 vm05.local ceph-mon[61345]: from='client.49751 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T20:53:24.957 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:24.956+0000 7f88dcff9640 1 -- 192.168.123.105:0/1801529060 <== mon.2 v1:192.168.123.105:6790/0 12 ==== osd_map(773..773 src has 1..773) ==== 628+0+0 (unknown 3841033131 0 0) 0x7f88d00933c0 con 0x7f88e0075f90 2026-03-09T20:53:24.957 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:24.956+0000 7f88dcff9640 1 -- 192.168.123.105:0/1801529060 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=774}) -- 0x7f88b0003620 con 0x7f88e0075f90 2026-03-09T20:53:24.964 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:24.963+0000 7f88dcff9640 1 -- 192.168.123.105:0/1801529060 <== mon.2 v1:192.168.123.105:6790/0 13 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "0"}]=0 set-quota max_bytes = 0 for pool 27ffa175-ba53-4b7b-afd8-5d830c8341ae v773) ==== 217+0+0 (unknown 1528379943 0 0) 0x7f88d00593e0 con 0x7f88e0075f90 2026-03-09T20:53:24.964 INFO:tasks.workunit.client.0.vm05.stderr:set-quota max_bytes = 0 for pool 27ffa175-ba53-4b7b-afd8-5d830c8341ae 2026-03-09T20:53:24.966 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:24.965+0000 7f88c27fc640 1 -- 192.168.123.105:0/1801529060 >> v1:192.168.123.105:6800/1903060503 conn(0x7f88b0081210 legacy=0x7f88b00836d0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:24.966 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:24.965+0000 7f88c27fc640 1 -- 192.168.123.105:0/1801529060 >> v1:192.168.123.105:6790/0 conn(0x7f88e0075f90 legacy=0x7f88e01a66b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:24.966 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:24.965+0000 7f88c27fc640 1 -- 192.168.123.105:0/1801529060 shutdown_connections 2026-03-09T20:53:24.966 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:24.965+0000 7f88c27fc640 1 -- 192.168.123.105:0/1801529060 >> 192.168.123.105:0/1801529060 conn(0x7f88e00fe3b0 msgr2=0x7f88e0113880 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:53:24.966 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:24.965+0000 7f88c27fc640 1 -- 192.168.123.105:0/1801529060 shutdown_connections 2026-03-09T20:53:24.966 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:24.965+0000 7f88c27fc640 1 -- 192.168.123.105:0/1801529060 wait complete. 
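The messenger trace above is the client side of the "ceph osd pool set-quota ... max_bytes 0" call issued by the rados/test_pool_quota.sh workunit. A minimal sketch of that step, using the pool name from this run; the POOL variable is only an illustrative shorthand, not part of the script:

    POOL=27ffa175-ba53-4b7b-afd8-5d830c8341ae
    # Setting a quota to 0 removes it; the monitor acknowledges with
    # "set-quota max_bytes = 0 for pool <pool>", as seen in the ack above.
    ceph osd pool set-quota "$POOL" max_bytes 0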
2026-03-09T20:53:24.977 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph osd pool set-quota 27ffa175-ba53-4b7b-afd8-5d830c8341ae max_objects 0 2026-03-09T20:53:25.043 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:25.042+0000 7f7cfb6b7640 1 -- 192.168.123.105:0/3929356593 >> v1:192.168.123.105:6789/0 conn(0x7f7cf00a42a0 legacy=0x7f7cf00a6740 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:25.043 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:25.043+0000 7f7cfb6b7640 1 -- 192.168.123.105:0/3929356593 shutdown_connections 2026-03-09T20:53:25.043 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:25.043+0000 7f7cfb6b7640 1 -- 192.168.123.105:0/3929356593 >> 192.168.123.105:0/3929356593 conn(0x7f7cf0093500 msgr2=0x7f7cf0095920 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:53:25.043 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:25.043+0000 7f7cfb6b7640 1 -- 192.168.123.105:0/3929356593 shutdown_connections 2026-03-09T20:53:25.043 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:25.043+0000 7f7cfb6b7640 1 -- 192.168.123.105:0/3929356593 wait complete. 2026-03-09T20:53:25.044 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:25.043+0000 7f7cfb6b7640 1 Processor -- start 2026-03-09T20:53:25.044 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:25.043+0000 7f7cfb6b7640 1 -- start start 2026-03-09T20:53:25.044 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:25.043+0000 7f7cfb6b7640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f7cf013e830 con 0x7f7cf00a42a0 2026-03-09T20:53:25.044 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:25.043+0000 7f7cfb6b7640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f7cf013fa30 con 0x7f7cf00a06d0 2026-03-09T20:53:25.044 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:25.043+0000 7f7cfb6b7640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f7cf0140c30 con 0x7f7cf009d830 2026-03-09T20:53:25.044 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:25.043+0000 7f7cf9eb4640 1 --1- >> v1:192.168.123.109:6789/0 conn(0x7f7cf00a06d0 0x7f7cf0139570 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.109:6789/0 says I am v1:192.168.123.105:37312/0 (socket says 192.168.123.105:37312) 2026-03-09T20:53:25.044 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:25.043+0000 7f7cf9eb4640 1 -- 192.168.123.105:0/4212788258 learned_addr learned my addr 192.168.123.105:0/4212788258 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:53:25.044 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:25.044+0000 7f7cdf7fe640 1 -- 192.168.123.105:0/4212788258 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 104810718 0 0) 0x7f7cf013fa30 con 0x7f7cf00a06d0 2026-03-09T20:53:25.044 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:25.044+0000 7f7cdf7fe640 1 -- 192.168.123.105:0/4212788258 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f7cd4003620 con 0x7f7cf00a06d0 2026-03-09T20:53:25.044 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:25.044+0000 7f7cdf7fe640 1 -- 192.168.123.105:0/4212788258 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1145560831 0 0) 0x7f7cf013e830 con 0x7f7cf00a42a0 2026-03-09T20:53:25.044 
INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:25.044+0000 7f7cdf7fe640 1 -- 192.168.123.105:0/4212788258 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f7cf013fa30 con 0x7f7cf00a42a0 2026-03-09T20:53:25.045 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:25.044+0000 7f7cdf7fe640 1 -- 192.168.123.105:0/4212788258 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2051007357 0 0) 0x7f7cf013fa30 con 0x7f7cf00a42a0 2026-03-09T20:53:25.045 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:25.044+0000 7f7cdf7fe640 1 -- 192.168.123.105:0/4212788258 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f7cf013e830 con 0x7f7cf00a42a0 2026-03-09T20:53:25.045 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:25.044+0000 7f7cdf7fe640 1 -- 192.168.123.105:0/4212788258 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f7ce8003160 con 0x7f7cf00a42a0 2026-03-09T20:53:25.045 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:25.044+0000 7f7cdf7fe640 1 -- 192.168.123.105:0/4212788258 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 644833556 0 0) 0x7f7cf0140c30 con 0x7f7cf009d830 2026-03-09T20:53:25.045 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:25.044+0000 7f7cdf7fe640 1 -- 192.168.123.105:0/4212788258 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f7cf013fa30 con 0x7f7cf009d830 2026-03-09T20:53:25.045 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:25.044+0000 7f7cdf7fe640 1 -- 192.168.123.105:0/4212788258 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 759097924 0 0) 0x7f7cd4003620 con 0x7f7cf00a06d0 2026-03-09T20:53:25.045 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:25.044+0000 7f7cdf7fe640 1 -- 192.168.123.105:0/4212788258 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f7cf0140c30 con 0x7f7cf00a06d0 2026-03-09T20:53:25.045 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:25.044+0000 7f7cdf7fe640 1 -- 192.168.123.105:0/4212788258 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3144651224 0 0) 0x7f7cf013fa30 con 0x7f7cf009d830 2026-03-09T20:53:25.045 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:25.044+0000 7f7cdf7fe640 1 -- 192.168.123.105:0/4212788258 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f7cd4003620 con 0x7f7cf009d830 2026-03-09T20:53:25.045 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:25.044+0000 7f7cdf7fe640 1 -- 192.168.123.105:0/4212788258 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f7ce00031d0 con 0x7f7cf00a06d0 2026-03-09T20:53:25.045 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:25.044+0000 7f7cdf7fe640 1 -- 192.168.123.105:0/4212788258 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f7cec002ef0 con 0x7f7cf009d830 2026-03-09T20:53:25.046 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:25.044+0000 7f7cdf7fe640 1 -- 192.168.123.105:0/4212788258 <== mon.2 v1:192.168.123.105:6790/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2873677510 0 0) 0x7f7cd4003620 con 0x7f7cf009d830 2026-03-09T20:53:25.046 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:25.046+0000 
7f7cdf7fe640 1 -- 192.168.123.105:0/4212788258 >> v1:192.168.123.109:6789/0 conn(0x7f7cf00a06d0 legacy=0x7f7cf0139570 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:25.046 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:25.046+0000 7f7cdf7fe640 1 -- 192.168.123.105:0/4212788258 >> v1:192.168.123.105:6789/0 conn(0x7f7cf00a42a0 legacy=0x7f7cf013cf30 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:25.046 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:25.046+0000 7f7cdf7fe640 1 -- 192.168.123.105:0/4212788258 --> v1:192.168.123.105:6790/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f7cf0141e30 con 0x7f7cf009d830 2026-03-09T20:53:25.047 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:25.046+0000 7f7cfb6b7640 1 -- 192.168.123.105:0/4212788258 --> v1:192.168.123.105:6790/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f7cf013ea60 con 0x7f7cf009d830 2026-03-09T20:53:25.047 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:25.046+0000 7f7cfb6b7640 1 -- 192.168.123.105:0/4212788258 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=0}) -- 0x7f7cf013f020 con 0x7f7cf009d830 2026-03-09T20:53:25.047 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:25.046+0000 7f7cdf7fe640 1 -- 192.168.123.105:0/4212788258 <== mon.2 v1:192.168.123.105:6790/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f7cec004470 con 0x7f7cf009d830 2026-03-09T20:53:25.047 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:25.046+0000 7f7cdf7fe640 1 -- 192.168.123.105:0/4212788258 <== mon.2 v1:192.168.123.105:6790/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f7cec0048e0 con 0x7f7cf009d830 2026-03-09T20:53:25.048 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:25.048+0000 7f7cdf7fe640 1 -- 192.168.123.105:0/4212788258 <== mon.2 v1:192.168.123.105:6790/0 7 ==== mgrmap(e 22) ==== 100060+0+0 (unknown 1243104359 0 0) 0x7f7cec01cfd0 con 0x7f7cf009d830 2026-03-09T20:53:25.049 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:25.048+0000 7f7cdf7fe640 1 -- 192.168.123.105:0/4212788258 <== mon.2 v1:192.168.123.105:6790/0 8 ==== osd_map(773..773 src has 1..773) ==== 7794+0+0 (unknown 2081439000 0 0) 0x7f7cec094f90 con 0x7f7cf009d830 2026-03-09T20:53:25.052 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:25.048+0000 7f7cdf7fe640 1 -- 192.168.123.105:0/4212788258 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=774}) -- 0x7f7cd4003620 con 0x7f7cf009d830 2026-03-09T20:53:25.052 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:25.048+0000 7f7cfb6b7640 1 -- 192.168.123.105:0/4212788258 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f7cc4005180 con 0x7f7cf009d830 2026-03-09T20:53:25.052 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:25.051+0000 7f7cdf7fe640 1 -- 192.168.123.105:0/4212788258 <== mon.2 v1:192.168.123.105:6790/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f7cec0615c0 con 0x7f7cf009d830 2026-03-09T20:53:25.147 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:25.146+0000 7f7cfb6b7640 1 -- 192.168.123.105:0/4212788258 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "0"} v 0) -- 0x7f7cc4005470 con 0x7f7cf009d830 2026-03-09T20:53:25.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:24 vm09.local 
ceph-mon[54524]: from='client.49751 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T20:53:25.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:24 vm09.local ceph-mon[54524]: osdmap e772: 8 total, 8 up, 8 in 2026-03-09T20:53:25.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:24 vm09.local ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1801529060' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T20:53:25.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:24 vm09.local ceph-mon[54524]: from='client.49751 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T20:53:25.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:24 vm05.local ceph-mon[61345]: from='client.49751 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T20:53:25.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:24 vm05.local ceph-mon[61345]: osdmap e772: 8 total, 8 up, 8 in 2026-03-09T20:53:25.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:24 vm05.local ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1801529060' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T20:53:25.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:24 vm05.local ceph-mon[61345]: from='client.49751 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T20:53:25.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:24 vm05.local ceph-mon[51870]: from='client.49751 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T20:53:25.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:24 vm05.local ceph-mon[51870]: osdmap e772: 8 total, 8 up, 8 in 2026-03-09T20:53:25.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:24 vm05.local ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/1801529060' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T20:53:25.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:24 vm05.local ceph-mon[51870]: from='client.49751 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T20:53:25.988 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:25.987+0000 7f7cdf7fe640 1 -- 192.168.123.105:0/4212788258 <== mon.2 v1:192.168.123.105:6790/0 10 ==== osd_map(774..774 src has 1..774) ==== 628+0+0 (unknown 3133105487 0 0) 0x7f7cec059520 con 0x7f7cf009d830 2026-03-09T20:53:25.988 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:25.987+0000 7f7cdf7fe640 1 -- 192.168.123.105:0/4212788258 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=775}) -- 0x7f7cf013fa30 con 0x7f7cf009d830 2026-03-09T20:53:25.995 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:25.994+0000 7f7cdf7fe640 1 -- 192.168.123.105:0/4212788258 <== mon.2 v1:192.168.123.105:6790/0 11 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "0"}]=0 set-quota max_objects = 0 for pool 27ffa175-ba53-4b7b-afd8-5d830c8341ae v774) ==== 221+0+0 (unknown 1830232 0 0) 0x7f7cec066500 con 0x7f7cf009d830 2026-03-09T20:53:26.053 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:26.052+0000 7f7cfb6b7640 1 -- 192.168.123.105:0/4212788258 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "0"} v 0) -- 0x7f7cc40028a0 con 0x7f7cf009d830 2026-03-09T20:53:26.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:25 vm09.local ceph-mon[54524]: pgmap v1696: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:53:26.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:25 vm09.local ceph-mon[54524]: from='client.49751 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T20:53:26.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:25 vm09.local ceph-mon[54524]: osdmap e773: 8 total, 8 up, 8 in 2026-03-09T20:53:26.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:25 vm09.local ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/4212788258' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T20:53:26.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:25 vm09.local ceph-mon[54524]: from='client.49754 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T20:53:26.366 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:26.365+0000 7f7cdf7fe640 1 -- 192.168.123.105:0/4212788258 <== mon.2 v1:192.168.123.105:6790/0 12 ==== osd_map(775..775 src has 1..775) ==== 628+0+0 (unknown 295129626 0 0) 0x7f7cec092d40 con 0x7f7cf009d830 2026-03-09T20:53:26.366 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:26.365+0000 7f7cdf7fe640 1 -- 192.168.123.105:0/4212788258 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=776}) -- 0x7f7cd40857f0 con 0x7f7cf009d830 2026-03-09T20:53:26.371 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:26.371+0000 7f7cdf7fe640 1 -- 192.168.123.105:0/4212788258 <== mon.2 v1:192.168.123.105:6790/0 13 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "0"}]=0 set-quota max_objects = 0 for pool 27ffa175-ba53-4b7b-afd8-5d830c8341ae v775) ==== 221+0+0 (unknown 2152414393 0 0) 0x7f7cec058d60 con 0x7f7cf009d830 2026-03-09T20:53:26.372 INFO:tasks.workunit.client.0.vm05.stderr:set-quota max_objects = 0 for pool 27ffa175-ba53-4b7b-afd8-5d830c8341ae 2026-03-09T20:53:26.374 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:26.373+0000 7f7cfb6b7640 1 -- 192.168.123.105:0/4212788258 >> v1:192.168.123.105:6800/1903060503 conn(0x7f7cd40785b0 legacy=0x7f7cd407aa70 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:26.374 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:26.373+0000 7f7cfb6b7640 1 -- 192.168.123.105:0/4212788258 >> v1:192.168.123.105:6790/0 conn(0x7f7cf009d830 legacy=0x7f7cf00a3b70 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:26.374 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:26.374+0000 7f7cfb6b7640 1 -- 192.168.123.105:0/4212788258 shutdown_connections 2026-03-09T20:53:26.374 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:26.374+0000 7f7cfb6b7640 1 -- 192.168.123.105:0/4212788258 >> 192.168.123.105:0/4212788258 conn(0x7f7cf0093500 msgr2=0x7f7cf0094f60 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:53:26.375 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:26.374+0000 7f7cfb6b7640 1 -- 192.168.123.105:0/4212788258 shutdown_connections 2026-03-09T20:53:26.375 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:26.374+0000 7f7cfb6b7640 1 -- 192.168.123.105:0/4212788258 wait complete. 
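The second invocation traced above clears the object-count quota on the same pool. A minimal sketch of that step, under the same illustrative POOL shorthand:

    # Clearing both quotas returns the pool to an unlimited state; once the new
    # osdmap propagates, the monitors drop the pool's full flag and the POOL_FULL
    # health check clears (visible in the mon journal entries further below).
    ceph osd pool set-quota "$POOL" max_objects 0

The workunit then verifies that writes succeed again, which is the "rados put three /etc/passwd" step traced next.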
2026-03-09T20:53:26.384 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 169272 2026-03-09T20:53:26.384 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 0 -ne 0 ']' 2026-03-09T20:53:26.384 INFO:tasks.workunit.client.0.vm05.stderr:+ true 2026-03-09T20:53:26.384 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p 27ffa175-ba53-4b7b-afd8-5d830c8341ae put three /etc/passwd 2026-03-09T20:53:26.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:25 vm05.local ceph-mon[51870]: pgmap v1696: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:53:26.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:25 vm05.local ceph-mon[51870]: from='client.49751 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T20:53:26.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:25 vm05.local ceph-mon[51870]: osdmap e773: 8 total, 8 up, 8 in 2026-03-09T20:53:26.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:25 vm05.local ceph-mon[51870]: from='client.? v1:192.168.123.105:0/4212788258' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T20:53:26.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:25 vm05.local ceph-mon[51870]: from='client.49754 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T20:53:26.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:25 vm05.local ceph-mon[61345]: pgmap v1696: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:53:26.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:25 vm05.local ceph-mon[61345]: from='client.49751 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T20:53:26.411 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:25 vm05.local ceph-mon[61345]: osdmap e773: 8 total, 8 up, 8 in 2026-03-09T20:53:26.411 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:25 vm05.local ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/4212788258' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T20:53:26.411 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:25 vm05.local ceph-mon[61345]: from='client.49754 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T20:53:26.416 INFO:tasks.workunit.client.0.vm05.stderr:++ uuidgen 2026-03-09T20:53:26.417 INFO:tasks.workunit.client.0.vm05.stderr:+ pp=7e4fbacd-bf45-40ed-8505-6e93c7ca9219 2026-03-09T20:53:26.417 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph osd pool create 7e4fbacd-bf45-40ed-8505-6e93c7ca9219 12 2026-03-09T20:53:26.471 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:26.470+0000 7f3a2757e640 1 -- 192.168.123.105:0/3521548384 >> v1:192.168.123.105:6789/0 conn(0x7f3a2010f0f0 legacy=0x7f3a20111590 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:26.471 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:26.470+0000 7f3a2757e640 1 -- 192.168.123.105:0/3521548384 shutdown_connections 2026-03-09T20:53:26.471 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:26.470+0000 7f3a2757e640 1 -- 192.168.123.105:0/3521548384 >> 192.168.123.105:0/3521548384 conn(0x7f3a200fe3b0 msgr2=0x7f3a201007d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:53:26.471 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:26.470+0000 7f3a2757e640 1 -- 192.168.123.105:0/3521548384 shutdown_connections 2026-03-09T20:53:26.472 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:26.470+0000 7f3a2757e640 1 -- 192.168.123.105:0/3521548384 wait complete. 
2026-03-09T20:53:26.472 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:26.471+0000 7f3a2757e640 1 Processor -- start 2026-03-09T20:53:26.472 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:26.471+0000 7f3a2757e640 1 -- start start 2026-03-09T20:53:26.472 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:26.471+0000 7f3a2757e640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f3a201ab9a0 con 0x7f3a2010f0f0 2026-03-09T20:53:26.472 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:26.471+0000 7f3a2757e640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f3a201acba0 con 0x7f3a20108680 2026-03-09T20:53:26.472 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:26.471+0000 7f3a2757e640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f3a201adda0 con 0x7f3a2010b520 2026-03-09T20:53:26.472 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:26.471+0000 7f3a25af4640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f3a2010f0f0 0x7f3a201aa0a0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:47248/0 (socket says 192.168.123.105:47248) 2026-03-09T20:53:26.472 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:26.471+0000 7f3a25af4640 1 -- 192.168.123.105:0/1683813297 learned_addr learned my addr 192.168.123.105:0/1683813297 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:53:26.472 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:26.471+0000 7f3a0e7fc640 1 -- 192.168.123.105:0/1683813297 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 163518782 0 0) 0x7f3a201ab9a0 con 0x7f3a2010f0f0 2026-03-09T20:53:26.473 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:26.472+0000 7f3a0e7fc640 1 -- 192.168.123.105:0/1683813297 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f39fc003620 con 0x7f3a2010f0f0 2026-03-09T20:53:26.473 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:26.472+0000 7f3a0e7fc640 1 -- 192.168.123.105:0/1683813297 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 4006242779 0 0) 0x7f39fc003620 con 0x7f3a2010f0f0 2026-03-09T20:53:26.473 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:26.472+0000 7f3a0e7fc640 1 -- 192.168.123.105:0/1683813297 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f3a201ab9a0 con 0x7f3a2010f0f0 2026-03-09T20:53:26.473 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:26.472+0000 7f3a0e7fc640 1 -- 192.168.123.105:0/1683813297 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f3a1c003580 con 0x7f3a2010f0f0 2026-03-09T20:53:26.473 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:26.472+0000 7f3a0e7fc640 1 -- 192.168.123.105:0/1683813297 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 3013149485 0 0) 0x7f3a201ab9a0 con 0x7f3a2010f0f0 2026-03-09T20:53:26.473 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:26.472+0000 7f3a0e7fc640 1 -- 192.168.123.105:0/1683813297 >> v1:192.168.123.105:6790/0 conn(0x7f3a2010b520 legacy=0x7f3a201a68b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:26.473 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:26.472+0000 7f3a0e7fc640 1 -- 192.168.123.105:0/1683813297 >> 
v1:192.168.123.109:6789/0 conn(0x7f3a20108680 legacy=0x7f3a20076c80 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:26.473 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:26.473+0000 7f3a0e7fc640 1 -- 192.168.123.105:0/1683813297 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f3a201aefa0 con 0x7f3a2010f0f0 2026-03-09T20:53:26.473 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:26.473+0000 7f3a0e7fc640 1 -- 192.168.123.105:0/1683813297 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f3a1c003a00 con 0x7f3a2010f0f0 2026-03-09T20:53:26.474 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:26.473+0000 7f3a2757e640 1 -- 192.168.123.105:0/1683813297 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f3a201adfd0 con 0x7f3a2010f0f0 2026-03-09T20:53:26.474 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:26.473+0000 7f3a0e7fc640 1 -- 192.168.123.105:0/1683813297 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f3a1c005300 con 0x7f3a2010f0f0 2026-03-09T20:53:26.476 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:26.474+0000 7f3a2757e640 1 -- 192.168.123.105:0/1683813297 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f3a201ae590 con 0x7f3a2010f0f0 2026-03-09T20:53:26.476 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:26.475+0000 7f3a0e7fc640 1 -- 192.168.123.105:0/1683813297 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 22) ==== 100060+0+0 (unknown 1243104359 0 0) 0x7f3a1c0054a0 con 0x7f3a2010f0f0 2026-03-09T20:53:26.477 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:26.476+0000 7f3a2757e640 1 -- 192.168.123.105:0/1683813297 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f3a20103db0 con 0x7f3a2010f0f0 2026-03-09T20:53:26.477 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:26.476+0000 7f3a0e7fc640 1 -- 192.168.123.105:0/1683813297 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(775..775 src has 1..775) ==== 7794+0+0 (unknown 3237649684 0 0) 0x7f3a1c093ce0 con 0x7f3a2010f0f0 2026-03-09T20:53:26.480 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:26.479+0000 7f3a0e7fc640 1 -- 192.168.123.105:0/1683813297 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f3a1c0621c0 con 0x7f3a2010f0f0 2026-03-09T20:53:26.576 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:26.575+0000 7f3a2757e640 1 -- 192.168.123.105:0/1683813297 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "osd pool create", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "pg_num": 12} v 0) -- 0x7f3a20114e60 con 0x7f3a2010f0f0 2026-03-09T20:53:27.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:26 vm09.local ceph-mon[54524]: from='client.49754 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "0"}]': finished 2026-03-09T20:53:27.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:26 vm09.local ceph-mon[54524]: osdmap e774: 8 total, 8 up, 8 in 2026-03-09T20:53:27.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:26 vm09.local ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/4212788258' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T20:53:27.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:26 vm09.local ceph-mon[54524]: from='client.49754 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T20:53:27.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:26 vm09.local ceph-mon[54524]: pool '27ffa175-ba53-4b7b-afd8-5d830c8341ae' no longer out of quota; removing NO_QUOTA flag 2026-03-09T20:53:27.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:26 vm09.local ceph-mon[54524]: Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T20:53:27.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:26 vm09.local ceph-mon[54524]: from='client.49754 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "0"}]': finished 2026-03-09T20:53:27.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:26 vm09.local ceph-mon[54524]: osdmap e775: 8 total, 8 up, 8 in 2026-03-09T20:53:27.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:26 vm09.local ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1683813297' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "pg_num": 12}]: dispatch 2026-03-09T20:53:27.273 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:53:26 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:53:27.373 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:27.372+0000 7f3a0e7fc640 1 -- 192.168.123.105:0/1683813297 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "osd pool create", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "pg_num": 12}]=0 pool '7e4fbacd-bf45-40ed-8505-6e93c7ca9219' created v776) ==== 176+0+0 (unknown 2493563902 0 0) 0x7f3a1c067100 con 0x7f3a2010f0f0 2026-03-09T20:53:27.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:26 vm05.local ceph-mon[61345]: from='client.49754 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "0"}]': finished 2026-03-09T20:53:27.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:26 vm05.local ceph-mon[61345]: osdmap e774: 8 total, 8 up, 8 in 2026-03-09T20:53:27.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:26 vm05.local ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/4212788258' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T20:53:27.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:26 vm05.local ceph-mon[61345]: from='client.49754 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T20:53:27.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:26 vm05.local ceph-mon[61345]: pool '27ffa175-ba53-4b7b-afd8-5d830c8341ae' no longer out of quota; removing NO_QUOTA flag 2026-03-09T20:53:27.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:26 vm05.local ceph-mon[61345]: Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T20:53:27.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:26 vm05.local ceph-mon[61345]: from='client.49754 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "0"}]': finished 2026-03-09T20:53:27.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:26 vm05.local ceph-mon[61345]: osdmap e775: 8 total, 8 up, 8 in 2026-03-09T20:53:27.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:26 vm05.local ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1683813297' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "pg_num": 12}]: dispatch 2026-03-09T20:53:27.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:26 vm05.local ceph-mon[51870]: from='client.49754 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "0"}]': finished 2026-03-09T20:53:27.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:26 vm05.local ceph-mon[51870]: osdmap e774: 8 total, 8 up, 8 in 2026-03-09T20:53:27.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:26 vm05.local ceph-mon[51870]: from='client.? v1:192.168.123.105:0/4212788258' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T20:53:27.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:26 vm05.local ceph-mon[51870]: from='client.49754 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T20:53:27.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:26 vm05.local ceph-mon[51870]: pool '27ffa175-ba53-4b7b-afd8-5d830c8341ae' no longer out of quota; removing NO_QUOTA flag 2026-03-09T20:53:27.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:26 vm05.local ceph-mon[51870]: Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T20:53:27.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:26 vm05.local ceph-mon[51870]: from='client.49754 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "0"}]': finished 2026-03-09T20:53:27.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:26 vm05.local ceph-mon[51870]: osdmap e775: 8 total, 8 up, 8 in 2026-03-09T20:53:27.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:26 vm05.local ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/1683813297' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "pg_num": 12}]: dispatch 2026-03-09T20:53:27.441 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:27.440+0000 7f3a2757e640 1 -- 192.168.123.105:0/1683813297 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "osd pool create", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "pg_num": 12} v 0) -- 0x7f3a201ae880 con 0x7f3a2010f0f0 2026-03-09T20:53:27.441 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:27.440+0000 7f3a0e7fc640 1 -- 192.168.123.105:0/1683813297 <== mon.0 v1:192.168.123.105:6789/0 11 ==== mon_command_ack([{"prefix": "osd pool create", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "pg_num": 12}]=0 pool '7e4fbacd-bf45-40ed-8505-6e93c7ca9219' already exists v776) ==== 183+0+0 (unknown 3465682211 0 0) 0x7f3a1c05a120 con 0x7f3a2010f0f0 2026-03-09T20:53:27.441 INFO:tasks.workunit.client.0.vm05.stderr:pool '7e4fbacd-bf45-40ed-8505-6e93c7ca9219' already exists 2026-03-09T20:53:27.442 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:27.442+0000 7f3a2757e640 1 -- 192.168.123.105:0/1683813297 >> v1:192.168.123.105:6800/1903060503 conn(0x7f39fc077f70 legacy=0x7f39fc07a430 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:27.443 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:27.442+0000 7f3a2757e640 1 -- 192.168.123.105:0/1683813297 >> v1:192.168.123.105:6789/0 conn(0x7f3a2010f0f0 legacy=0x7f3a201aa0a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:27.443 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:27.442+0000 7f3a2757e640 1 -- 192.168.123.105:0/1683813297 shutdown_connections 2026-03-09T20:53:27.443 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:27.442+0000 7f3a2757e640 1 -- 192.168.123.105:0/1683813297 >> 192.168.123.105:0/1683813297 conn(0x7f3a200fe3b0 msgr2=0x7f3a2010aac0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:53:27.443 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:27.442+0000 7f3a2757e640 1 -- 192.168.123.105:0/1683813297 shutdown_connections 2026-03-09T20:53:27.443 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:27.442+0000 7f3a2757e640 1 -- 192.168.123.105:0/1683813297 wait complete. 2026-03-09T20:53:27.451 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph osd pool application enable 7e4fbacd-bf45-40ed-8505-6e93c7ca9219 rados 2026-03-09T20:53:27.503 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:27.501+0000 7f6f4680b640 1 -- 192.168.123.105:0/2520319950 >> v1:192.168.123.105:6789/0 conn(0x7f6f4010d8c0 legacy=0x7f6f4010fcb0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:27.503 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:27.502+0000 7f6f4680b640 1 -- 192.168.123.105:0/2520319950 shutdown_connections 2026-03-09T20:53:27.503 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:27.502+0000 7f6f4680b640 1 -- 192.168.123.105:0/2520319950 >> 192.168.123.105:0/2520319950 conn(0x7f6f400fc510 msgr2=0x7f6f400fe930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:53:27.503 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:27.502+0000 7f6f4680b640 1 -- 192.168.123.105:0/2520319950 shutdown_connections 2026-03-09T20:53:27.503 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:27.502+0000 7f6f4680b640 1 -- 192.168.123.105:0/2520319950 wait complete. 
2026-03-09T20:53:27.503 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:27.502+0000 7f6f4680b640 1 Processor -- start 2026-03-09T20:53:27.504 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:27.502+0000 7f6f4680b640 1 -- start start 2026-03-09T20:53:27.504 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:27.503+0000 7f6f4680b640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f6f401ab6d0 con 0x7f6f40100950 2026-03-09T20:53:27.504 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:27.503+0000 7f6f4680b640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f6f401ac8d0 con 0x7f6f4010d8c0 2026-03-09T20:53:27.504 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:27.503+0000 7f6f4680b640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f6f401adad0 con 0x7f6f401113d0 2026-03-09T20:53:27.504 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:27.503+0000 7f6f44d81640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7f6f401113d0 0x7f6f401a9df0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.105:46720/0 (socket says 192.168.123.105:46720) 2026-03-09T20:53:27.504 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:27.503+0000 7f6f44d81640 1 -- 192.168.123.105:0/3885823250 learned_addr learned my addr 192.168.123.105:0/3885823250 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:53:27.504 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:27.503+0000 7f6f3d7fa640 1 -- 192.168.123.105:0/3885823250 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 415848639 0 0) 0x7f6f401adad0 con 0x7f6f401113d0 2026-03-09T20:53:27.504 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:27.503+0000 7f6f3d7fa640 1 -- 192.168.123.105:0/3885823250 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f6f18003620 con 0x7f6f401113d0 2026-03-09T20:53:27.505 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:27.503+0000 7f6f3d7fa640 1 -- 192.168.123.105:0/3885823250 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1496503479 0 0) 0x7f6f401ac8d0 con 0x7f6f4010d8c0 2026-03-09T20:53:27.505 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:27.503+0000 7f6f3d7fa640 1 -- 192.168.123.105:0/3885823250 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f6f401adad0 con 0x7f6f4010d8c0 2026-03-09T20:53:27.505 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:27.503+0000 7f6f3d7fa640 1 -- 192.168.123.105:0/3885823250 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 82319059 0 0) 0x7f6f401adad0 con 0x7f6f4010d8c0 2026-03-09T20:53:27.505 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:27.503+0000 7f6f3d7fa640 1 -- 192.168.123.105:0/3885823250 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f6f401ac8d0 con 0x7f6f4010d8c0 2026-03-09T20:53:27.505 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:27.503+0000 7f6f3d7fa640 1 -- 192.168.123.105:0/3885823250 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f6f24004460 con 0x7f6f4010d8c0 2026-03-09T20:53:27.505 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:27.503+0000 7f6f3d7fa640 1 -- 192.168.123.105:0/3885823250 <== mon.2 v1:192.168.123.105:6790/0 2 ==== 
auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1650183922 0 0) 0x7f6f18003620 con 0x7f6f401113d0 2026-03-09T20:53:27.505 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:27.504+0000 7f6f3d7fa640 1 -- 192.168.123.105:0/3885823250 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f6f401adad0 con 0x7f6f401113d0 2026-03-09T20:53:27.505 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:27.504+0000 7f6f3d7fa640 1 -- 192.168.123.105:0/3885823250 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f6f34003580 con 0x7f6f401113d0 2026-03-09T20:53:27.505 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:27.504+0000 7f6f3d7fa640 1 -- 192.168.123.105:0/3885823250 <== mon.1 v1:192.168.123.109:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2817873304 0 0) 0x7f6f401ac8d0 con 0x7f6f4010d8c0 2026-03-09T20:53:27.505 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:27.504+0000 7f6f3d7fa640 1 -- 192.168.123.105:0/3885823250 >> v1:192.168.123.105:6790/0 conn(0x7f6f401113d0 legacy=0x7f6f401a9df0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:27.505 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:27.504+0000 7f6f3d7fa640 1 -- 192.168.123.105:0/3885823250 >> v1:192.168.123.105:6789/0 conn(0x7f6f40100950 legacy=0x7f6f4010cee0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:27.505 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:27.504+0000 7f6f3d7fa640 1 -- 192.168.123.105:0/3885823250 --> v1:192.168.123.109:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f6f401aecd0 con 0x7f6f4010d8c0 2026-03-09T20:53:27.505 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:27.504+0000 7f6f4680b640 1 -- 192.168.123.105:0/3885823250 --> v1:192.168.123.109:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f6f401acb00 con 0x7f6f4010d8c0 2026-03-09T20:53:27.506 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:27.504+0000 7f6f3d7fa640 1 -- 192.168.123.105:0/3885823250 <== mon.1 v1:192.168.123.109:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f6f240031d0 con 0x7f6f4010d8c0 2026-03-09T20:53:27.506 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:27.504+0000 7f6f4680b640 1 -- 192.168.123.105:0/3885823250 --> v1:192.168.123.109:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f6f401ad140 con 0x7f6f4010d8c0 2026-03-09T20:53:27.506 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:27.505+0000 7f6f3d7fa640 1 -- 192.168.123.105:0/3885823250 <== mon.1 v1:192.168.123.109:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f6f240038e0 con 0x7f6f4010d8c0 2026-03-09T20:53:27.506 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:27.505+0000 7f6f4680b640 1 -- 192.168.123.105:0/3885823250 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f6f04005180 con 0x7f6f4010d8c0 2026-03-09T20:53:27.507 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:27.506+0000 7f6f3d7fa640 1 -- 192.168.123.105:0/3885823250 <== mon.1 v1:192.168.123.109:6789/0 7 ==== mgrmap(e 22) ==== 100060+0+0 (unknown 1243104359 0 0) 0x7f6f24003b00 con 0x7f6f4010d8c0 2026-03-09T20:53:27.507 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:27.506+0000 7f6f3d7fa640 1 -- 192.168.123.105:0/3885823250 <== mon.1 v1:192.168.123.109:6789/0 8 ==== osd_map(776..776 src has 1..776) ==== 8169+0+0 (unknown 2785341443 0 0) 0x7f6f240946a0 con 0x7f6f4010d8c0 
2026-03-09T20:53:27.510 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:27.509+0000 7f6f3d7fa640 1 -- 192.168.123.105:0/3885823250 <== mon.1 v1:192.168.123.109:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f6f24060b60 con 0x7f6f4010d8c0 2026-03-09T20:53:27.606 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:27.604+0000 7f6f4680b640 1 -- 192.168.123.105:0/3885823250 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "osd pool application enable", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "app": "rados"} v 0) -- 0x7f6f04005470 con 0x7f6f4010d8c0 2026-03-09T20:53:28.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:27 vm09.local ceph-mon[54524]: pgmap v1700: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:53:28.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:27 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:53:28.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:27 vm09.local ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1683813297' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "pg_num": 12}]': finished 2026-03-09T20:53:28.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:27 vm09.local ceph-mon[54524]: osdmap e776: 8 total, 8 up, 8 in 2026-03-09T20:53:28.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:27 vm09.local ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1683813297' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "pg_num": 12}]: dispatch 2026-03-09T20:53:28.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:27 vm09.local ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3885823250' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "app": "rados"}]: dispatch 2026-03-09T20:53:28.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:27 vm09.local ceph-mon[54524]: from='client.50527 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "app": "rados"}]: dispatch 2026-03-09T20:53:28.385 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:28.384+0000 7f6f3d7fa640 1 -- 192.168.123.105:0/3885823250 <== mon.1 v1:192.168.123.109:6789/0 10 ==== mon_command_ack([{"prefix": "osd pool application enable", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "app": "rados"}]=0 enabled application 'rados' on pool '7e4fbacd-bf45-40ed-8505-6e93c7ca9219' v777) ==== 213+0+0 (unknown 1671297454 0 0) 0x7f6f24065aa0 con 0x7f6f4010d8c0 2026-03-09T20:53:28.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:28 vm05.local ceph-mon[61345]: pgmap v1700: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:53:28.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:28 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:53:28.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:28 vm05.local ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1683813297' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "pg_num": 12}]': finished 2026-03-09T20:53:28.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:28 vm05.local ceph-mon[61345]: osdmap e776: 8 total, 8 up, 8 in 2026-03-09T20:53:28.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:28 vm05.local ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1683813297' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "pg_num": 12}]: dispatch 2026-03-09T20:53:28.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:28 vm05.local ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3885823250' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "app": "rados"}]: dispatch 2026-03-09T20:53:28.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:28 vm05.local ceph-mon[61345]: from='client.50527 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "app": "rados"}]: dispatch 2026-03-09T20:53:28.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:27 vm05.local ceph-mon[51870]: pgmap v1700: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:53:28.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:27 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:53:28.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:27 vm05.local ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1683813297' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "pg_num": 12}]': finished 2026-03-09T20:53:28.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:27 vm05.local ceph-mon[51870]: osdmap e776: 8 total, 8 up, 8 in 2026-03-09T20:53:28.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:27 vm05.local ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1683813297' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "pg_num": 12}]: dispatch 2026-03-09T20:53:28.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:27 vm05.local ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3885823250' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "app": "rados"}]: dispatch 2026-03-09T20:53:28.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:27 vm05.local ceph-mon[51870]: from='client.50527 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "app": "rados"}]: dispatch 2026-03-09T20:53:28.444 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:28.443+0000 7f6f4680b640 1 -- 192.168.123.105:0/3885823250 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "osd pool application enable", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "app": "rados"} v 0) -- 0x7f6f04005d40 con 0x7f6f4010d8c0 2026-03-09T20:53:28.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:53:28 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:53:28] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:53:29.390 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:29.389+0000 7f6f3d7fa640 1 -- 192.168.123.105:0/3885823250 <== mon.1 v1:192.168.123.109:6789/0 11 ==== mon_command_ack([{"prefix": "osd pool application enable", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "app": "rados"}]=0 enabled application 'rados' on pool '7e4fbacd-bf45-40ed-8505-6e93c7ca9219' v778) ==== 213+0+0 (unknown 771744388 0 0) 0x7f6f24058ac0 con 0x7f6f4010d8c0 2026-03-09T20:53:29.390 INFO:tasks.workunit.client.0.vm05.stderr:enabled application 'rados' on pool '7e4fbacd-bf45-40ed-8505-6e93c7ca9219' 2026-03-09T20:53:29.392 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:29.392+0000 7f6f4680b640 1 -- 192.168.123.105:0/3885823250 >> v1:192.168.123.105:6800/1903060503 conn(0x7f6f180783b0 legacy=0x7f6f1807a870 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:29.393 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:29.392+0000 7f6f4680b640 1 -- 192.168.123.105:0/3885823250 >> v1:192.168.123.109:6789/0 conn(0x7f6f4010d8c0 legacy=0x7f6f401a66c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:29.393 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:29.392+0000 7f6f4680b640 1 -- 192.168.123.105:0/3885823250 shutdown_connections 2026-03-09T20:53:29.393 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:29.392+0000 7f6f4680b640 1 -- 192.168.123.105:0/3885823250 >> 192.168.123.105:0/3885823250 conn(0x7f6f400fc510 msgr2=0x7f6f40113890 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:53:29.393 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:29.392+0000 7f6f4680b640 1 -- 192.168.123.105:0/3885823250 shutdown_connections 2026-03-09T20:53:29.393 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:29.392+0000 7f6f4680b640 1 -- 192.168.123.105:0/3885823250 wait complete. 
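At this point the workunit has created the test pool and tagged it with the rados application: the monitors log the osd pool create (pg_num 12) finishing and then ack osd pool application enable with "enabled application 'rados' on pool ...". A minimal sketch of the same two steps, using an illustrative pool name rather than the UUID-style name the test generates:

    # create a small pool and tag its application so POOL_APP_NOT_ENABLED is not raised
    ceph osd pool create testpool 12
    ceph osd pool application enable testpool rados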
2026-03-09T20:53:29.403 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph osd pool set-quota 7e4fbacd-bf45-40ed-8505-6e93c7ca9219 max_objects 10 2026-03-09T20:53:29.456 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:29.455+0000 7f0ea3a0b640 1 -- 192.168.123.105:0/2924417126 >> v1:192.168.123.105:6789/0 conn(0x7f0e9c10d7b0 legacy=0x7f0e9c10fba0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:29.456 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:29.455+0000 7f0ea3a0b640 1 -- 192.168.123.105:0/2924417126 shutdown_connections 2026-03-09T20:53:29.456 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:29.455+0000 7f0ea3a0b640 1 -- 192.168.123.105:0/2924417126 >> 192.168.123.105:0/2924417126 conn(0x7f0e9c1005c0 msgr2=0x7f0e9c1029e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:53:29.456 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:29.455+0000 7f0ea3a0b640 1 -- 192.168.123.105:0/2924417126 shutdown_connections 2026-03-09T20:53:29.456 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:29.456+0000 7f0ea3a0b640 1 -- 192.168.123.105:0/2924417126 wait complete. 2026-03-09T20:53:29.457 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:29.456+0000 7f0ea3a0b640 1 Processor -- start 2026-03-09T20:53:29.457 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:29.456+0000 7f0ea3a0b640 1 -- start start 2026-03-09T20:53:29.457 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:29.456+0000 7f0ea3a0b640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f0e9c1ab790 con 0x7f0e9c10d7b0 2026-03-09T20:53:29.457 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:29.456+0000 7f0ea3a0b640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f0e9c1ac990 con 0x7f0e9c10a910 2026-03-09T20:53:29.457 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:29.456+0000 7f0ea3a0b640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f0e9c1adb90 con 0x7f0e9c111380 2026-03-09T20:53:29.457 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:29.456+0000 7f0ea1f81640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7f0e9c111380 0x7f0e9c1a9e90 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.105:46734/0 (socket says 192.168.123.105:46734) 2026-03-09T20:53:29.457 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:29.456+0000 7f0ea1f81640 1 -- 192.168.123.105:0/3129070759 learned_addr learned my addr 192.168.123.105:0/3129070759 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:53:29.457 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:29.457+0000 7f0e8a7fc640 1 -- 192.168.123.105:0/3129070759 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2933924551 0 0) 0x7f0e9c1ab790 con 0x7f0e9c10d7b0 2026-03-09T20:53:29.457 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:29.457+0000 7f0e8a7fc640 1 -- 192.168.123.105:0/3129070759 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f0e78003620 con 0x7f0e9c10d7b0 2026-03-09T20:53:29.457 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:29.457+0000 7f0e8a7fc640 1 -- 192.168.123.105:0/3129070759 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3587973930 0 0) 0x7f0e9c1ac990 con 0x7f0e9c10a910 2026-03-09T20:53:29.458 
INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:29.457+0000 7f0e8a7fc640 1 -- 192.168.123.105:0/3129070759 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f0e9c1ab790 con 0x7f0e9c10a910 2026-03-09T20:53:29.458 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:29.457+0000 7f0e8a7fc640 1 -- 192.168.123.105:0/3129070759 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3793261661 0 0) 0x7f0e9c1adb90 con 0x7f0e9c111380 2026-03-09T20:53:29.458 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:29.457+0000 7f0e8a7fc640 1 -- 192.168.123.105:0/3129070759 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f0e9c1ac990 con 0x7f0e9c111380 2026-03-09T20:53:29.458 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:29.457+0000 7f0e8a7fc640 1 -- 192.168.123.105:0/3129070759 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 4293650149 0 0) 0x7f0e78003620 con 0x7f0e9c10d7b0 2026-03-09T20:53:29.458 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:29.457+0000 7f0e8a7fc640 1 -- 192.168.123.105:0/3129070759 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f0e9c1adb90 con 0x7f0e9c10d7b0 2026-03-09T20:53:29.458 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:29.457+0000 7f0e8a7fc640 1 -- 192.168.123.105:0/3129070759 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2727957156 0 0) 0x7f0e9c1ab790 con 0x7f0e9c10a910 2026-03-09T20:53:29.458 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:29.457+0000 7f0e8a7fc640 1 -- 192.168.123.105:0/3129070759 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f0e78003620 con 0x7f0e9c10a910 2026-03-09T20:53:29.458 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:29.457+0000 7f0e8a7fc640 1 -- 192.168.123.105:0/3129070759 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f0e8c003300 con 0x7f0e9c10d7b0 2026-03-09T20:53:29.458 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:29.457+0000 7f0e8a7fc640 1 -- 192.168.123.105:0/3129070759 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f0e90002ca0 con 0x7f0e9c10a910 2026-03-09T20:53:29.458 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:29.457+0000 7f0e8a7fc640 1 -- 192.168.123.105:0/3129070759 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3516473737 0 0) 0x7f0e9c1ac990 con 0x7f0e9c111380 2026-03-09T20:53:29.458 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:29.457+0000 7f0e8a7fc640 1 -- 192.168.123.105:0/3129070759 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f0e9c1ab790 con 0x7f0e9c111380 2026-03-09T20:53:29.458 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:29.457+0000 7f0e8a7fc640 1 -- 192.168.123.105:0/3129070759 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f0e98003310 con 0x7f0e9c111380 2026-03-09T20:53:29.458 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:29.457+0000 7f0e8a7fc640 1 -- 192.168.123.105:0/3129070759 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 1134567947 0 0) 0x7f0e9c1adb90 con 0x7f0e9c10d7b0 2026-03-09T20:53:29.458 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:29.457+0000 
7f0e8a7fc640 1 -- 192.168.123.105:0/3129070759 >> v1:192.168.123.105:6790/0 conn(0x7f0e9c111380 legacy=0x7f0e9c1a9e90 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:29.458 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:29.457+0000 7f0e8a7fc640 1 -- 192.168.123.105:0/3129070759 >> v1:192.168.123.109:6789/0 conn(0x7f0e9c10a910 legacy=0x7f0e9c110a40 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:29.458 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:29.457+0000 7f0e8a7fc640 1 -- 192.168.123.105:0/3129070759 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f0e9c1aed90 con 0x7f0e9c10d7b0 2026-03-09T20:53:29.458 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:29.458+0000 7f0ea3a0b640 1 -- 192.168.123.105:0/3129070759 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f0e9c1ab9c0 con 0x7f0e9c10d7b0 2026-03-09T20:53:29.458 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:29.458+0000 7f0e8a7fc640 1 -- 192.168.123.105:0/3129070759 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f0e8c004200 con 0x7f0e9c10d7b0 2026-03-09T20:53:29.460 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:29.458+0000 7f0e8a7fc640 1 -- 192.168.123.105:0/3129070759 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f0e8c005220 con 0x7f0e9c10d7b0 2026-03-09T20:53:29.460 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:29.458+0000 7f0ea3a0b640 1 -- 192.168.123.105:0/3129070759 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f0e9c1abee0 con 0x7f0e9c10d7b0 2026-03-09T20:53:29.463 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:29.459+0000 7f0e8a7fc640 1 -- 192.168.123.105:0/3129070759 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 22) ==== 100060+0+0 (unknown 1243104359 0 0) 0x7f0e8c003db0 con 0x7f0e9c10d7b0 2026-03-09T20:53:29.463 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:29.459+0000 7f0ea3a0b640 1 -- 192.168.123.105:0/3129070759 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f0e64005180 con 0x7f0e9c10d7b0 2026-03-09T20:53:29.463 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:29.459+0000 7f0e8a7fc640 1 -- 192.168.123.105:0/3129070759 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(778..778 src has 1..778) ==== 8182+0+0 (unknown 3982587789 0 0) 0x7f0e8c0946b0 con 0x7f0e9c10d7b0 2026-03-09T20:53:29.463 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:29.462+0000 7f0e8a7fc640 1 -- 192.168.123.105:0/3129070759 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f0e8c060b60 con 0x7f0e9c10d7b0 2026-03-09T20:53:29.557 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:29.555+0000 7f0ea3a0b640 1 -- 192.168.123.105:0/3129070759 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "osd pool set-quota", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "field": "max_objects", "val": "10"} v 0) -- 0x7f0e64005470 con 0x7f0e9c10d7b0 2026-03-09T20:53:29.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:29 vm05.local ceph-mon[61345]: from='client.50527 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "app": "rados"}]': finished 2026-03-09T20:53:29.660 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:29 vm05.local ceph-mon[61345]: osdmap e777: 8 total, 8 up, 8 in 2026-03-09T20:53:29.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:29 vm05.local ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3885823250' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "app": "rados"}]: dispatch 2026-03-09T20:53:29.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:29 vm05.local ceph-mon[61345]: from='client.50527 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "app": "rados"}]: dispatch 2026-03-09T20:53:29.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:29 vm05.local ceph-mon[61345]: pgmap v1703: 188 pgs: 12 unknown, 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T20:53:29.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:29 vm05.local ceph-mon[51870]: from='client.50527 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "app": "rados"}]': finished 2026-03-09T20:53:29.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:29 vm05.local ceph-mon[51870]: osdmap e777: 8 total, 8 up, 8 in 2026-03-09T20:53:29.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:29 vm05.local ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3885823250' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "app": "rados"}]: dispatch 2026-03-09T20:53:29.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:29 vm05.local ceph-mon[51870]: from='client.50527 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "app": "rados"}]: dispatch 2026-03-09T20:53:29.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:29 vm05.local ceph-mon[51870]: pgmap v1703: 188 pgs: 12 unknown, 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T20:53:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:29 vm09.local ceph-mon[54524]: from='client.50527 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "app": "rados"}]': finished 2026-03-09T20:53:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:29 vm09.local ceph-mon[54524]: osdmap e777: 8 total, 8 up, 8 in 2026-03-09T20:53:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:29 vm09.local ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3885823250' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "app": "rados"}]: dispatch 2026-03-09T20:53:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:29 vm09.local ceph-mon[54524]: from='client.50527 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "app": "rados"}]: dispatch 2026-03-09T20:53:29.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:29 vm09.local ceph-mon[54524]: pgmap v1703: 188 pgs: 12 unknown, 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T20:53:30.394 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:30.393+0000 7f0e8a7fc640 1 -- 192.168.123.105:0/3129070759 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "field": "max_objects", "val": "10"}]=0 set-quota max_objects = 10 for pool 7e4fbacd-bf45-40ed-8505-6e93c7ca9219 v779) ==== 223+0+0 (unknown 3859008132 0 0) 0x7f0e8c065aa0 con 0x7f0e9c10d7b0 2026-03-09T20:53:30.451 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:30.451+0000 7f0ea3a0b640 1 -- 192.168.123.105:0/3129070759 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "osd pool set-quota", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "field": "max_objects", "val": "10"} v 0) -- 0x7f0e64005d40 con 0x7f0e9c10d7b0 2026-03-09T20:53:30.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:30 vm05.local ceph-mon[51870]: from='client.50527 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "app": "rados"}]': finished 2026-03-09T20:53:30.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:30 vm05.local ceph-mon[51870]: osdmap e778: 8 total, 8 up, 8 in 2026-03-09T20:53:30.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:30 vm05.local ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3129070759' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T20:53:30.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:30 vm05.local ceph-mon[61345]: from='client.50527 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "app": "rados"}]': finished 2026-03-09T20:53:30.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:30 vm05.local ceph-mon[61345]: osdmap e778: 8 total, 8 up, 8 in 2026-03-09T20:53:30.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:30 vm05.local ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3129070759' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T20:53:30.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:30 vm09.local ceph-mon[54524]: from='client.50527 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "app": "rados"}]': finished 2026-03-09T20:53:30.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:30 vm09.local ceph-mon[54524]: osdmap e778: 8 total, 8 up, 8 in 2026-03-09T20:53:30.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:30 vm09.local ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3129070759' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T20:53:31.406 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:31.405+0000 7f0e8a7fc640 1 -- 192.168.123.105:0/3129070759 <== mon.0 v1:192.168.123.105:6789/0 11 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "field": "max_objects", "val": "10"}]=0 set-quota max_objects = 10 for pool 7e4fbacd-bf45-40ed-8505-6e93c7ca9219 v780) ==== 223+0+0 (unknown 490194469 0 0) 0x7f0e8c058ac0 con 0x7f0e9c10d7b0 2026-03-09T20:53:31.406 INFO:tasks.workunit.client.0.vm05.stderr:set-quota max_objects = 10 for pool 7e4fbacd-bf45-40ed-8505-6e93c7ca9219 2026-03-09T20:53:31.408 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:31.408+0000 7f0ea3a0b640 1 -- 192.168.123.105:0/3129070759 >> v1:192.168.123.105:6800/1903060503 conn(0x7f0e78078670 legacy=0x7f0e7807ab30 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:31.409 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:31.408+0000 7f0ea3a0b640 1 -- 192.168.123.105:0/3129070759 >> v1:192.168.123.105:6789/0 conn(0x7f0e9c10d7b0 legacy=0x7f0e9c1a6650 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:53:31.409 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:31.408+0000 7f0ea3a0b640 1 -- 192.168.123.105:0/3129070759 shutdown_connections 2026-03-09T20:53:31.409 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:31.408+0000 7f0ea3a0b640 1 -- 192.168.123.105:0/3129070759 >> 192.168.123.105:0/3129070759 conn(0x7f0e9c1005c0 msgr2=0x7f0e9c103190 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:53:31.409 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:31.408+0000 7f0ea3a0b640 1 -- 192.168.123.105:0/3129070759 shutdown_connections 2026-03-09T20:53:31.409 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:53:31.408+0000 7f0ea3a0b640 1 -- 192.168.123.105:0/3129070759 wait complete. 2026-03-09T20:53:31.423 INFO:tasks.workunit.client.0.vm05.stderr:+ sleep 30 2026-03-09T20:53:31.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:31 vm05.local ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3129070759' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "field": "max_objects", "val": "10"}]': finished 2026-03-09T20:53:31.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:31 vm05.local ceph-mon[51870]: osdmap e779: 8 total, 8 up, 8 in 2026-03-09T20:53:31.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:31 vm05.local ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/3129070759' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T20:53:31.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:31 vm05.local ceph-mon[51870]: pgmap v1706: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 1 op/s 2026-03-09T20:53:31.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:31 vm05.local ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:53:31.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:31 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:53:31.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:31 vm05.local ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3129070759' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "field": "max_objects", "val": "10"}]': finished 2026-03-09T20:53:31.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:31 vm05.local ceph-mon[61345]: osdmap e779: 8 total, 8 up, 8 in 2026-03-09T20:53:31.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:31 vm05.local ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3129070759' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T20:53:31.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:31 vm05.local ceph-mon[61345]: pgmap v1706: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 1 op/s 2026-03-09T20:53:31.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:31 vm05.local ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:53:31.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:31 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:53:31.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:31 vm09.local ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3129070759' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "field": "max_objects", "val": "10"}]': finished 2026-03-09T20:53:31.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:31 vm09.local ceph-mon[54524]: osdmap e779: 8 total, 8 up, 8 in 2026-03-09T20:53:31.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:31 vm09.local ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3129070759' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "field": "max_objects", "val": "10"}]: dispatch 2026-03-09T20:53:31.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:31 vm09.local ceph-mon[54524]: pgmap v1706: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 1 op/s 2026-03-09T20:53:31.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:31 vm09.local ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:53:31.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:31 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:53:32.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:32 vm05.local ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3129070759' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "field": "max_objects", "val": "10"}]': finished 2026-03-09T20:53:32.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:32 vm05.local ceph-mon[51870]: osdmap e780: 8 total, 8 up, 8 in 2026-03-09T20:53:32.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:32 vm05.local ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3129070759' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "field": "max_objects", "val": "10"}]': finished 2026-03-09T20:53:32.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:32 vm05.local ceph-mon[61345]: osdmap e780: 8 total, 8 up, 8 in 2026-03-09T20:53:32.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:32 vm09.local ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3129070759' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "field": "max_objects", "val": "10"}]': finished 2026-03-09T20:53:32.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:32 vm09.local ceph-mon[54524]: osdmap e780: 8 total, 8 up, 8 in 2026-03-09T20:53:33.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:33 vm09.local ceph-mon[54524]: pgmap v1708: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 954 B/s wr, 1 op/s 2026-03-09T20:53:33.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:33 vm05.local ceph-mon[51870]: pgmap v1708: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 954 B/s wr, 1 op/s 2026-03-09T20:53:33.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:33 vm05.local ceph-mon[61345]: pgmap v1708: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 954 B/s wr, 1 op/s 2026-03-09T20:53:36.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:35 vm09.local ceph-mon[54524]: pgmap v1709: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 682 B/s wr, 1 op/s 2026-03-09T20:53:36.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:35 vm05.local ceph-mon[61345]: pgmap v1709: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 682 B/s wr, 1 op/s 2026-03-09T20:53:36.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:35 vm05.local ceph-mon[51870]: pgmap v1709: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 682 B/s wr, 1 op/s 2026-03-09T20:53:37.273 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:53:36 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:53:37.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:37 vm05.local ceph-mon[61345]: pgmap v1710: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 561 B/s wr, 1 op/s 2026-03-09T20:53:37.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:37 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:53:37.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:37 vm05.local ceph-mon[51870]: pgmap v1710: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 561 B/s wr, 1 op/s 2026-03-09T20:53:37.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:37 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:53:37.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:37 vm09.local ceph-mon[54524]: pgmap v1710: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 561 B/s wr, 1 op/s 2026-03-09T20:53:37.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:37 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:53:38.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:53:38 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 
- - [09/Mar/2026:20:53:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:53:40.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:39 vm09.local ceph-mon[54524]: pgmap v1711: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 618 B/s rd, 0 op/s 2026-03-09T20:53:40.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:39 vm05.local ceph-mon[61345]: pgmap v1711: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 618 B/s rd, 0 op/s 2026-03-09T20:53:40.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:39 vm05.local ceph-mon[51870]: pgmap v1711: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 618 B/s rd, 0 op/s 2026-03-09T20:53:42.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:41 vm09.local ceph-mon[54524]: pgmap v1712: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:53:42.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:41 vm05.local ceph-mon[61345]: pgmap v1712: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:53:42.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:41 vm05.local ceph-mon[51870]: pgmap v1712: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:53:44.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:43 vm09.local ceph-mon[54524]: pgmap v1713: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 908 B/s rd, 0 op/s 2026-03-09T20:53:44.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:43 vm05.local ceph-mon[61345]: pgmap v1713: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 908 B/s rd, 0 op/s 2026-03-09T20:53:44.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:43 vm05.local ceph-mon[51870]: pgmap v1713: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 908 B/s rd, 0 op/s 2026-03-09T20:53:46.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:45 vm09.local ceph-mon[54524]: pgmap v1714: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:53:46.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:45 vm05.local ceph-mon[61345]: pgmap v1714: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:53:46.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:45 vm05.local ceph-mon[51870]: pgmap v1714: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:53:47.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:46 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:53:47.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:46 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:53:47.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:46 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:53:47.273 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:53:46 vm09.local 
ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:53:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:47 vm05.local ceph-mon[61345]: pgmap v1715: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:53:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:47 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:53:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:47 vm05.local ceph-mon[51870]: pgmap v1715: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:53:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:47 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:53:48.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:47 vm09.local ceph-mon[54524]: pgmap v1715: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:53:48.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:47 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:53:48.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:53:48 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:53:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:53:50.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:49 vm05.local ceph-mon[61345]: pgmap v1716: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:53:50.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:49 vm05.local ceph-mon[51870]: pgmap v1716: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:53:50.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:49 vm09.local ceph-mon[54524]: pgmap v1716: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:53:52.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:51 vm05.local ceph-mon[61345]: pgmap v1717: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:53:52.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:51 vm05.local ceph-mon[51870]: pgmap v1717: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:53:52.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:51 vm09.local ceph-mon[54524]: pgmap v1717: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:53:54.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:53 vm05.local ceph-mon[61345]: pgmap v1718: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:53:54.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:53 vm05.local ceph-mon[51870]: pgmap v1718: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
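The step traced above is the quota itself: the ceph osd pool set-quota call is acked with "set-quota max_objects = 10 for pool ...", and the script then sleeps 30 seconds, presumably to let the new osdmap propagate before any writes are attempted. A minimal sketch of the same step plus a follow-up check, again with an illustrative pool name:

    # cap the pool at ten objects, then confirm the quota took effect
    ceph osd pool set-quota testpool max_objects 10
    ceph osd pool get-quota testpool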
2026-03-09T20:53:54.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:53 vm09.local ceph-mon[54524]: pgmap v1718: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:53:56.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:55 vm05.local ceph-mon[61345]: pgmap v1719: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:53:56.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:55 vm05.local ceph-mon[51870]: pgmap v1719: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:53:56.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:55 vm09.local ceph-mon[54524]: pgmap v1719: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:53:57.273 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:53:56 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:53:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:57 vm05.local ceph-mon[61345]: pgmap v1720: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:53:57.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:57 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:53:57.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:57 vm05.local ceph-mon[51870]: pgmap v1720: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:53:57.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:57 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:53:57.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:57 vm09.local ceph-mon[54524]: pgmap v1720: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:53:57.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:57 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:53:58.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:53:58 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:53:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:54:00.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:53:59 vm09.local ceph-mon[54524]: pgmap v1721: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:54:00.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:53:59 vm05.local ceph-mon[61345]: pgmap v1721: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:54:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:53:59 vm05.local ceph-mon[51870]: pgmap v1721: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:54:01.425 INFO:tasks.workunit.client.0.vm05.stderr:++ seq 1 10 2026-03-09T20:54:01.425 INFO:tasks.workunit.client.0.vm05.stderr:+ 
for f in `seq 1 10` 2026-03-09T20:54:01.426 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p 7e4fbacd-bf45-40ed-8505-6e93c7ca9219 put obj1 /etc/passwd 2026-03-09T20:54:01.451 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in `seq 1 10` 2026-03-09T20:54:01.451 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p 7e4fbacd-bf45-40ed-8505-6e93c7ca9219 put obj2 /etc/passwd 2026-03-09T20:54:01.476 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in `seq 1 10` 2026-03-09T20:54:01.476 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p 7e4fbacd-bf45-40ed-8505-6e93c7ca9219 put obj3 /etc/passwd 2026-03-09T20:54:01.501 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in `seq 1 10` 2026-03-09T20:54:01.501 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p 7e4fbacd-bf45-40ed-8505-6e93c7ca9219 put obj4 /etc/passwd 2026-03-09T20:54:01.525 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in `seq 1 10` 2026-03-09T20:54:01.526 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p 7e4fbacd-bf45-40ed-8505-6e93c7ca9219 put obj5 /etc/passwd 2026-03-09T20:54:01.550 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in `seq 1 10` 2026-03-09T20:54:01.550 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p 7e4fbacd-bf45-40ed-8505-6e93c7ca9219 put obj6 /etc/passwd 2026-03-09T20:54:01.574 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in `seq 1 10` 2026-03-09T20:54:01.574 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p 7e4fbacd-bf45-40ed-8505-6e93c7ca9219 put obj7 /etc/passwd 2026-03-09T20:54:01.598 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in `seq 1 10` 2026-03-09T20:54:01.598 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p 7e4fbacd-bf45-40ed-8505-6e93c7ca9219 put obj8 /etc/passwd 2026-03-09T20:54:01.623 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in `seq 1 10` 2026-03-09T20:54:01.623 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p 7e4fbacd-bf45-40ed-8505-6e93c7ca9219 put obj9 /etc/passwd 2026-03-09T20:54:01.647 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in `seq 1 10` 2026-03-09T20:54:01.647 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p 7e4fbacd-bf45-40ed-8505-6e93c7ca9219 put obj10 /etc/passwd 2026-03-09T20:54:01.671 INFO:tasks.workunit.client.0.vm05.stderr:+ sleep 30 2026-03-09T20:54:02.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:01 vm09.local ceph-mon[54524]: pgmap v1722: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:54:02.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:01 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:54:02.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:01 vm05.local ceph-mon[51870]: pgmap v1722: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:54:02.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:01 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:54:02.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:01 vm05.local ceph-mon[61345]: pgmap v1722: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:54:02.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:01 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd 
blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:54:04.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:03 vm09.local ceph-mon[54524]: pgmap v1723: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:54:04.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:03 vm05.local ceph-mon[61345]: pgmap v1723: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:54:04.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:03 vm05.local ceph-mon[51870]: pgmap v1723: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:54:06.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:05 vm09.local ceph-mon[54524]: pgmap v1724: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:54:06.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:05 vm05.local ceph-mon[61345]: pgmap v1724: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:54:06.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:05 vm05.local ceph-mon[51870]: pgmap v1724: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:54:07.273 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:54:06 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:54:08.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:07 vm05.local ceph-mon[51870]: pgmap v1725: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 2 op/s 2026-03-09T20:54:08.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:07 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:54:08.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:07 vm05.local ceph-mon[61345]: pgmap v1725: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 2 op/s 2026-03-09T20:54:08.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:07 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:54:08.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:07 vm09.local ceph-mon[54524]: pgmap v1725: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 2 op/s 2026-03-09T20:54:08.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:07 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:54:08.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:08 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:54:08.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:08 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:54:08.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 
09 20:54:08 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:54:08.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:08 vm05.local ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:54:08.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:08 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:54:08.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:08 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:54:08.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:08 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:54:08.910 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:08 vm05.local ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:54:08.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:54:08 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:54:08] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:54:09.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:08 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:54:09.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:08 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:54:09.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:08 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:54:09.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:08 vm09.local ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:54:10.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:09 vm05.local ceph-mon[61345]: pgmap v1726: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 1.7 KiB/s wr, 1 op/s 2026-03-09T20:54:10.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:09 vm05.local ceph-mon[51870]: pgmap v1726: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 1.7 KiB/s wr, 1 op/s 2026-03-09T20:54:10.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:09 vm09.local ceph-mon[54524]: pgmap v1726: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 1.7 KiB/s wr, 1 op/s 2026-03-09T20:54:12.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:11 vm05.local ceph-mon[61345]: pgmap v1727: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 2 op/s 2026-03-09T20:54:12.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:11 vm05.local ceph-mon[61345]: pool '7e4fbacd-bf45-40ed-8505-6e93c7ca9219' is full (reached quota's max_objects: 10) 2026-03-09T20:54:12.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:11 vm05.local ceph-mon[61345]: Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T20:54:12.160 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:11 vm05.local ceph-mon[61345]: osdmap e781: 8 total, 8 up, 8 in 2026-03-09T20:54:12.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:11 vm05.local ceph-mon[51870]: pgmap v1727: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 2 op/s 2026-03-09T20:54:12.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:11 vm05.local ceph-mon[51870]: pool '7e4fbacd-bf45-40ed-8505-6e93c7ca9219' is full (reached quota's max_objects: 10) 2026-03-09T20:54:12.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:11 vm05.local ceph-mon[51870]: Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T20:54:12.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:11 vm05.local ceph-mon[51870]: osdmap e781: 8 total, 8 up, 8 in 2026-03-09T20:54:12.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:11 vm09.local ceph-mon[54524]: pgmap v1727: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 2 op/s 2026-03-09T20:54:12.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:11 vm09.local ceph-mon[54524]: pool '7e4fbacd-bf45-40ed-8505-6e93c7ca9219' is full (reached quota's max_objects: 10) 2026-03-09T20:54:12.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:11 vm09.local ceph-mon[54524]: Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-09T20:54:12.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:11 vm09.local ceph-mon[54524]: osdmap e781: 8 total, 8 up, 8 in 2026-03-09T20:54:14.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:13 vm05.local ceph-mon[61345]: pgmap v1729: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 2.0 KiB/s wr, 1 op/s 2026-03-09T20:54:14.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:13 vm05.local ceph-mon[51870]: pgmap v1729: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 2.0 KiB/s wr, 1 op/s 2026-03-09T20:54:14.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:13 vm09.local ceph-mon[54524]: pgmap v1729: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 2.0 KiB/s wr, 1 op/s 2026-03-09T20:54:16.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:15 vm05.local ceph-mon[61345]: pgmap v1730: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 2.0 KiB/s wr, 1 op/s 2026-03-09T20:54:16.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:15 vm05.local ceph-mon[51870]: pgmap v1730: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 2.0 KiB/s wr, 1 op/s 2026-03-09T20:54:16.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:15 vm09.local ceph-mon[54524]: pgmap v1730: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 2.0 KiB/s wr, 1 op/s 2026-03-09T20:54:17.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:16 vm09.local ceph-mon[54524]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:54:17.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:16 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:54:17.273 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:54:16 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 
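The pool has now hit its quota: the workunit wrote obj1 through obj10 with rados put, and a few seconds into the following 30-second wait the monitors log "pool ... is full (reached quota's max_objects: 10)" and raise the POOL_FULL health check. A minimal sketch of the same fill-and-verify sequence with illustrative names; further writes to a full pool are normally refused or stalled until the quota is raised or objects are removed:

    # write exactly as many objects as the quota allows, then check cluster health
    for i in $(seq 1 10); do
        rados -p testpool put obj$i /etc/passwd
    done
    sleep 30
    ceph health detail   # expected to report POOL_FULL: 1 pool(s) full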
2026-03-09T20:54:17.409 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:16 vm05.local ceph-mon[61345]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:54:17.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:16 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:54:17.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:16 vm05.local ceph-mon[51870]: from='mgr.24602 ' entity='mgr.y' 2026-03-09T20:54:17.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:16 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:54:18.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:18 vm05.local ceph-mon[61345]: pgmap v1731: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:54:18.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:18 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:54:18.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:17 vm05.local ceph-mon[51870]: pgmap v1731: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:54:18.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:17 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:54:18.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:18 vm09.local ceph-mon[54524]: pgmap v1731: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:54:18.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:18 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:54:18.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:54:18 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:54:18] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:54:20.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:19 vm09.local ceph-mon[54524]: pgmap v1732: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:54:20.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:19 vm05.local ceph-mon[61345]: pgmap v1732: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:54:20.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:19 vm05.local ceph-mon[51870]: pgmap v1732: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:54:22.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:22 vm09.local ceph-mon[54524]: pgmap v1733: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:54:22.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:22 vm05.local ceph-mon[61345]: pgmap v1733: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:54:22.410 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:22 vm05.local ceph-mon[51870]: pgmap v1733: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:54:24.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:24 vm05.local ceph-mon[61345]: pgmap v1734: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 904 B/s rd, 0 op/s 2026-03-09T20:54:24.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:24 vm05.local ceph-mon[51870]: pgmap v1734: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 904 B/s rd, 0 op/s 2026-03-09T20:54:24.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:24 vm09.local ceph-mon[54524]: pgmap v1734: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 904 B/s rd, 0 op/s 2026-03-09T20:54:26.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:26 vm05.local ceph-mon[61345]: pgmap v1735: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:54:26.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:26 vm05.local ceph-mon[51870]: pgmap v1735: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:54:26.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:26 vm09.local ceph-mon[54524]: pgmap v1735: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:54:27.273 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:54:26 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:54:28.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:28 vm05.local ceph-mon[61345]: pgmap v1736: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:54:28.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:28 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:54:28.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:28 vm05.local ceph-mon[51870]: pgmap v1736: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:54:28.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:28 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:54:28.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:28 vm09.local ceph-mon[54524]: pgmap v1736: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:54:28.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:28 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:54:28.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:54:28 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:54:28] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:54:30.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:30 vm05.local ceph-mon[61345]: pgmap v1737: 188 pgs: 188 active+clean; 490 KiB data, 1.1 
GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:54:30.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:30 vm05.local ceph-mon[51870]: pgmap v1737: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:54:30.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:30 vm09.local ceph-mon[54524]: pgmap v1737: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:54:31.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:31 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:54:31.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:31 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:54:31.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:31 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:54:31.672 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p 27ffa175-ba53-4b7b-afd8-5d830c8341ae put threemore /etc/passwd 2026-03-09T20:54:31.700 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph osd pool set-quota 27ffa175-ba53-4b7b-afd8-5d830c8341ae max_bytes 0 2026-03-09T20:54:31.754 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:31.753+0000 7efd96c2b640 1 -- 192.168.123.105:0/2375022723 >> v1:192.168.123.105:6789/0 conn(0x7efd90107630 legacy=0x7efd90107a10 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:54:31.754 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:31.753+0000 7efd96c2b640 1 -- 192.168.123.105:0/2375022723 shutdown_connections 2026-03-09T20:54:31.754 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:31.753+0000 7efd96c2b640 1 -- 192.168.123.105:0/2375022723 >> 192.168.123.105:0/2375022723 conn(0x7efd900fd210 msgr2=0x7efd900ff630 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:54:31.754 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:31.753+0000 7efd96c2b640 1 -- 192.168.123.105:0/2375022723 shutdown_connections 2026-03-09T20:54:31.754 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:31.753+0000 7efd96c2b640 1 -- 192.168.123.105:0/2375022723 wait complete. 
2026-03-09T20:54:31.755 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:31.754+0000 7efd96c2b640 1 Processor -- start 2026-03-09T20:54:31.755 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:31.754+0000 7efd96c2b640 1 -- start start 2026-03-09T20:54:31.755 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:31.754+0000 7efd949a0640 1 --1- >> v1:192.168.123.109:6789/0 conn(0x7efd90107630 0x7efd90106980 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.109:6789/0 says I am v1:192.168.123.105:43560/0 (socket says 192.168.123.105:43560) 2026-03-09T20:54:31.755 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:31.754+0000 7efd87fff640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7efd9010a3d0 0x7efd901a22c0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:54198/0 (socket says 192.168.123.105:54198) 2026-03-09T20:54:31.755 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:31.754+0000 7efd96c2b640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7efd901a74b0 con 0x7efd9010a3d0 2026-03-09T20:54:31.755 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:31.754+0000 7efd96c2b640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7efd901a86b0 con 0x7efd90107630 2026-03-09T20:54:31.755 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:31.754+0000 7efd96c2b640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7efd901a98b0 con 0x7efd9010e000 2026-03-09T20:54:31.755 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:31.754+0000 7efd949a0640 1 -- 192.168.123.105:0/3761328214 learned_addr learned my addr 192.168.123.105:0/3761328214 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:54:31.755 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:31.754+0000 7efd85ffb640 1 -- 192.168.123.105:0/3761328214 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1102805861 0 0) 0x7efd901a74b0 con 0x7efd9010a3d0 2026-03-09T20:54:31.755 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:31.755+0000 7efd85ffb640 1 -- 192.168.123.105:0/3761328214 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7efd64003620 con 0x7efd9010a3d0 2026-03-09T20:54:31.756 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:31.755+0000 7efd85ffb640 1 -- 192.168.123.105:0/3761328214 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3216515787 0 0) 0x7efd901a86b0 con 0x7efd90107630 2026-03-09T20:54:31.756 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:31.755+0000 7efd85ffb640 1 -- 192.168.123.105:0/3761328214 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7efd901a74b0 con 0x7efd90107630 2026-03-09T20:54:31.756 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:31.755+0000 7efd85ffb640 1 -- 192.168.123.105:0/3761328214 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2490771237 0 0) 0x7efd64003620 con 0x7efd9010a3d0 2026-03-09T20:54:31.756 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:31.755+0000 7efd85ffb640 1 -- 192.168.123.105:0/3761328214 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7efd901a86b0 con 0x7efd9010a3d0 2026-03-09T20:54:31.756 
INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:31.755+0000 7efd85ffb640 1 -- 192.168.123.105:0/3761328214 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7efd80003180 con 0x7efd9010a3d0 2026-03-09T20:54:31.756 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:31.755+0000 7efd85ffb640 1 -- 192.168.123.105:0/3761328214 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 3298078608 0 0) 0x7efd901a86b0 con 0x7efd9010a3d0 2026-03-09T20:54:31.756 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:31.755+0000 7efd85ffb640 1 -- 192.168.123.105:0/3761328214 >> v1:192.168.123.105:6790/0 conn(0x7efd9010e000 legacy=0x7efd901a5b80 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:54:31.756 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:31.755+0000 7efd85ffb640 1 -- 192.168.123.105:0/3761328214 >> v1:192.168.123.109:6789/0 conn(0x7efd90107630 legacy=0x7efd90106980 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:54:31.756 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:31.755+0000 7efd85ffb640 1 -- 192.168.123.105:0/3761328214 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7efd901aaab0 con 0x7efd9010a3d0 2026-03-09T20:54:31.756 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:31.755+0000 7efd96c2b640 1 -- 192.168.123.105:0/3761328214 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7efd901a88e0 con 0x7efd9010a3d0 2026-03-09T20:54:31.756 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:31.755+0000 7efd96c2b640 1 -- 192.168.123.105:0/3761328214 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7efd901a8ea0 con 0x7efd9010a3d0 2026-03-09T20:54:31.757 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:31.756+0000 7efd85ffb640 1 -- 192.168.123.105:0/3761328214 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7efd800034a0 con 0x7efd9010a3d0 2026-03-09T20:54:31.757 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:31.756+0000 7efd85ffb640 1 -- 192.168.123.105:0/3761328214 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7efd80005c10 con 0x7efd9010a3d0 2026-03-09T20:54:31.757 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:31.756+0000 7efd85ffb640 1 -- 192.168.123.105:0/3761328214 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 22) ==== 100060+0+0 (unknown 1243104359 0 0) 0x7efd80006ee0 con 0x7efd9010a3d0 2026-03-09T20:54:31.758 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:31.757+0000 7efd85ffb640 1 -- 192.168.123.105:0/3761328214 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(781..781 src has 1..781) ==== 8182+0+0 (unknown 2724359788 0 0) 0x7efd800967e0 con 0x7efd9010a3d0 2026-03-09T20:54:31.758 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:31.757+0000 7efd85ffb640 1 -- 192.168.123.105:0/3761328214 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=782}) -- 0x7efd901a86b0 con 0x7efd9010a3d0 2026-03-09T20:54:31.758 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:31.757+0000 7efd96c2b640 1 -- 192.168.123.105:0/3761328214 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7efd58005180 con 0x7efd9010a3d0 2026-03-09T20:54:31.761 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:31.760+0000 7efd85ffb640 1 -- 192.168.123.105:0/3761328214 
<== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7efd80062d10 con 0x7efd9010a3d0 2026-03-09T20:54:31.852 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:31.851+0000 7efd96c2b640 1 -- 192.168.123.105:0/3761328214 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "0"} v 0) -- 0x7efd58005470 con 0x7efd9010a3d0 2026-03-09T20:54:32.053 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:32.052+0000 7efd85ffb640 1 -- 192.168.123.105:0/3761328214 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "0"}]=0 set-quota max_bytes = 0 for pool 27ffa175-ba53-4b7b-afd8-5d830c8341ae v782) ==== 217+0+0 (unknown 3148415698 0 0) 0x7efd80067c50 con 0x7efd9010a3d0 2026-03-09T20:54:32.055 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:32.054+0000 7efd85ffb640 1 -- 192.168.123.105:0/3761328214 <== mon.0 v1:192.168.123.105:6789/0 11 ==== osd_map(782..782 src has 1..782) ==== 628+0+0 (unknown 1277051319 0 0) 0x7efd8005ac70 con 0x7efd9010a3d0 2026-03-09T20:54:32.055 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:32.054+0000 7efd85ffb640 1 -- 192.168.123.105:0/3761328214 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=783}) -- 0x7efd901a74b0 con 0x7efd9010a3d0 2026-03-09T20:54:32.120 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:32.119+0000 7efd96c2b640 1 -- 192.168.123.105:0/3761328214 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "0"} v 0) -- 0x7efd580028a0 con 0x7efd9010a3d0 2026-03-09T20:54:32.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:32 vm05.local ceph-mon[51870]: pgmap v1738: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:54:32.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:32 vm05.local ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3761328214' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T20:54:32.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:32 vm05.local ceph-mon[61345]: pgmap v1738: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:54:32.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:32 vm05.local ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3761328214' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T20:54:32.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:32 vm09.local ceph-mon[54524]: pgmap v1738: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:54:32.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:32 vm09.local ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3761328214' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T20:54:33.082 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:33.081+0000 7efd85ffb640 1 -- 192.168.123.105:0/3761328214 <== mon.0 v1:192.168.123.105:6789/0 12 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "0"}]=0 set-quota max_bytes = 0 for pool 27ffa175-ba53-4b7b-afd8-5d830c8341ae v783) ==== 217+0+0 (unknown 1794248476 0 0) 0x7efd8005aa50 con 0x7efd9010a3d0 2026-03-09T20:54:33.082 INFO:tasks.workunit.client.0.vm05.stderr:set-quota max_bytes = 0 for pool 27ffa175-ba53-4b7b-afd8-5d830c8341ae 2026-03-09T20:54:33.084 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:33.083+0000 7efd96c2b640 1 -- 192.168.123.105:0/3761328214 >> v1:192.168.123.105:6800/1903060503 conn(0x7efd640780e0 legacy=0x7efd6407a5a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:54:33.084 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:33.083+0000 7efd96c2b640 1 -- 192.168.123.105:0/3761328214 >> v1:192.168.123.105:6789/0 conn(0x7efd9010a3d0 legacy=0x7efd901a22c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:54:33.085 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:33.085+0000 7efd96c2b640 1 -- 192.168.123.105:0/3761328214 shutdown_connections 2026-03-09T20:54:33.085 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:33.085+0000 7efd96c2b640 1 -- 192.168.123.105:0/3761328214 >> 192.168.123.105:0/3761328214 conn(0x7efd900fd210 msgr2=0x7efd9006af40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:54:33.085 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:33.085+0000 7efd96c2b640 1 -- 192.168.123.105:0/3761328214 shutdown_connections 2026-03-09T20:54:33.085 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:33.085+0000 7efd96c2b640 1 -- 192.168.123.105:0/3761328214 wait complete. 2026-03-09T20:54:33.099 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph osd pool set-quota 27ffa175-ba53-4b7b-afd8-5d830c8341ae max_objects 0 2026-03-09T20:54:33.153 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:33.152+0000 7feb8601a640 1 -- 192.168.123.105:0/843520542 >> v1:192.168.123.105:6790/0 conn(0x7feb8010f110 legacy=0x7feb801115b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:54:33.153 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:33.153+0000 7feb8601a640 1 -- 192.168.123.105:0/843520542 shutdown_connections 2026-03-09T20:54:33.153 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:33.153+0000 7feb8601a640 1 -- 192.168.123.105:0/843520542 >> 192.168.123.105:0/843520542 conn(0x7feb800fe3b0 msgr2=0x7feb801007d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:54:33.153 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:33.153+0000 7feb8601a640 1 -- 192.168.123.105:0/843520542 shutdown_connections 2026-03-09T20:54:33.153 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:33.153+0000 7feb8601a640 1 -- 192.168.123.105:0/843520542 wait complete. 
2026-03-09T20:54:33.154 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:33.153+0000 7feb8601a640 1 Processor -- start 2026-03-09T20:54:33.154 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:33.153+0000 7feb8601a640 1 -- start start 2026-03-09T20:54:33.154 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:33.153+0000 7feb8601a640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7feb801a96b0 con 0x7feb801086a0 2026-03-09T20:54:33.154 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:33.153+0000 7feb8601a640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7feb801aa8b0 con 0x7feb8010b540 2026-03-09T20:54:33.154 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:33.153+0000 7feb8601a640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7feb801abab0 con 0x7feb8010f110 2026-03-09T20:54:33.154 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:33.153+0000 7feb7f7fe640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7feb801086a0 0x7feb8010e9f0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:54224/0 (socket says 192.168.123.105:54224) 2026-03-09T20:54:33.154 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:33.153+0000 7feb7f7fe640 1 -- 192.168.123.105:0/2920547349 learned_addr learned my addr 192.168.123.105:0/2920547349 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:54:33.154 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:33.153+0000 7feb7effd640 1 --1- 192.168.123.105:0/2920547349 >> v1:192.168.123.109:6789/0 conn(0x7feb8010b540 0x7feb801a43e0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.109:6789/0 says I am v1:192.168.123.105:43586/0 (socket says 192.168.123.105:43586) 2026-03-09T20:54:33.154 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:33.154+0000 7feb7cff9640 1 -- 192.168.123.105:0/2920547349 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3066458786 0 0) 0x7feb801a96b0 con 0x7feb801086a0 2026-03-09T20:54:33.154 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:33.154+0000 7feb7cff9640 1 -- 192.168.123.105:0/2920547349 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7feb50003620 con 0x7feb801086a0 2026-03-09T20:54:33.155 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:33.154+0000 7feb7cff9640 1 -- 192.168.123.105:0/2920547349 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1978022821 0 0) 0x7feb50003620 con 0x7feb801086a0 2026-03-09T20:54:33.155 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:33.154+0000 7feb7cff9640 1 -- 192.168.123.105:0/2920547349 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7feb801a96b0 con 0x7feb801086a0 2026-03-09T20:54:33.155 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:33.154+0000 7feb7cff9640 1 -- 192.168.123.105:0/2920547349 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7feb70003170 con 0x7feb801086a0 2026-03-09T20:54:33.155 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:33.154+0000 7feb7cff9640 1 -- 192.168.123.105:0/2920547349 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 3112261471 0 0) 0x7feb801a96b0 con 0x7feb801086a0 
2026-03-09T20:54:33.155 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:33.154+0000 7feb7cff9640 1 -- 192.168.123.105:0/2920547349 >> v1:192.168.123.105:6790/0 conn(0x7feb8010f110 legacy=0x7feb801a7db0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:54:33.156 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:33.154+0000 7feb7cff9640 1 -- 192.168.123.105:0/2920547349 >> v1:192.168.123.109:6789/0 conn(0x7feb8010b540 legacy=0x7feb801a43e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:54:33.156 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:33.154+0000 7feb7cff9640 1 -- 192.168.123.105:0/2920547349 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7feb801accb0 con 0x7feb801086a0 2026-03-09T20:54:33.156 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:33.154+0000 7feb8601a640 1 -- 192.168.123.105:0/2920547349 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7feb801abce0 con 0x7feb801086a0 2026-03-09T20:54:33.156 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:33.154+0000 7feb8601a640 1 -- 192.168.123.105:0/2920547349 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7feb801ac2a0 con 0x7feb801086a0 2026-03-09T20:54:33.156 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:33.154+0000 7feb7cff9640 1 -- 192.168.123.105:0/2920547349 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7feb70003310 con 0x7feb801086a0 2026-03-09T20:54:33.156 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:33.154+0000 7feb7cff9640 1 -- 192.168.123.105:0/2920547349 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7feb70005d90 con 0x7feb801086a0 2026-03-09T20:54:33.157 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:33.156+0000 7feb7cff9640 1 -- 192.168.123.105:0/2920547349 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 22) ==== 100060+0+0 (unknown 1243104359 0 0) 0x7feb70007060 con 0x7feb801086a0 2026-03-09T20:54:33.157 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:33.156+0000 7feb8601a640 1 -- 192.168.123.105:0/2920547349 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7feb44005180 con 0x7feb801086a0 2026-03-09T20:54:33.157 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:33.156+0000 7feb7cff9640 1 -- 192.168.123.105:0/2920547349 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(783..783 src has 1..783) ==== 8182+0+0 (unknown 4230771921 0 0) 0x7feb700967f0 con 0x7feb801086a0 2026-03-09T20:54:33.162 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:33.156+0000 7feb7cff9640 1 -- 192.168.123.105:0/2920547349 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=784}) -- 0x7feb801a96b0 con 0x7feb801086a0 2026-03-09T20:54:33.162 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:33.161+0000 7feb7cff9640 1 -- 192.168.123.105:0/2920547349 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7feb70062cc0 con 0x7feb801086a0 2026-03-09T20:54:33.254 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:33.253+0000 7feb8601a640 1 -- 192.168.123.105:0/2920547349 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "0"} v 0) -- 0x7feb44005470 con 0x7feb801086a0 
2026-03-09T20:54:33.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:33 vm05.local ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3761328214' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T20:54:33.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:33 vm05.local ceph-mon[61345]: osdmap e782: 8 total, 8 up, 8 in 2026-03-09T20:54:33.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:33 vm05.local ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3761328214' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T20:54:33.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:33 vm05.local ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3761328214' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T20:54:33.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:33 vm05.local ceph-mon[51870]: osdmap e782: 8 total, 8 up, 8 in 2026-03-09T20:54:33.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:33 vm05.local ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3761328214' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T20:54:33.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:33 vm09.local ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3761328214' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T20:54:33.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:33 vm09.local ceph-mon[54524]: osdmap e782: 8 total, 8 up, 8 in 2026-03-09T20:54:33.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:33 vm09.local ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/3761328214' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-09T20:54:34.091 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:34.090+0000 7feb7cff9640 1 -- 192.168.123.105:0/2920547349 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "0"}]=0 set-quota max_objects = 0 for pool 27ffa175-ba53-4b7b-afd8-5d830c8341ae v784) ==== 221+0+0 (unknown 631454613 0 0) 0x7feb70067c00 con 0x7feb801086a0 2026-03-09T20:54:34.094 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:34.093+0000 7feb7cff9640 1 -- 192.168.123.105:0/2920547349 <== mon.0 v1:192.168.123.105:6789/0 11 ==== osd_map(784..784 src has 1..784) ==== 628+0+0 (unknown 1214952130 0 0) 0x7feb7005ac20 con 0x7feb801086a0 2026-03-09T20:54:34.094 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:34.093+0000 7feb7cff9640 1 -- 192.168.123.105:0/2920547349 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=785}) -- 0x7feb50003620 con 0x7feb801086a0 2026-03-09T20:54:34.147 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:34.146+0000 7feb8601a640 1 -- 192.168.123.105:0/2920547349 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "0"} v 0) -- 0x7feb440028a0 con 0x7feb801086a0 2026-03-09T20:54:34.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:34 vm05.local ceph-mon[61345]: pgmap v1740: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:54:34.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:34 vm05.local ceph-mon[61345]: from='client.? v1:192.168.123.105:0/3761328214' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T20:54:34.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:34 vm05.local ceph-mon[61345]: osdmap e783: 8 total, 8 up, 8 in 2026-03-09T20:54:34.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:34 vm05.local ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2920547349' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T20:54:34.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:34 vm05.local ceph-mon[51870]: pgmap v1740: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:54:34.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:34 vm05.local ceph-mon[51870]: from='client.? v1:192.168.123.105:0/3761328214' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T20:54:34.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:34 vm05.local ceph-mon[51870]: osdmap e783: 8 total, 8 up, 8 in 2026-03-09T20:54:34.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:34 vm05.local ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/2920547349' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T20:54:34.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:34 vm09.local ceph-mon[54524]: pgmap v1740: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:54:34.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:34 vm09.local ceph-mon[54524]: from='client.? v1:192.168.123.105:0/3761328214' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_bytes", "val": "0"}]': finished 2026-03-09T20:54:34.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:34 vm09.local ceph-mon[54524]: osdmap e783: 8 total, 8 up, 8 in 2026-03-09T20:54:34.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:34 vm09.local ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2920547349' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T20:54:35.105 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:35.104+0000 7feb7cff9640 1 -- 192.168.123.105:0/2920547349 <== mon.0 v1:192.168.123.105:6789/0 12 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "0"}]=0 set-quota max_objects = 0 for pool 27ffa175-ba53-4b7b-afd8-5d830c8341ae v785) ==== 221+0+0 (unknown 2784225908 0 0) 0x7feb7005a8c0 con 0x7feb801086a0 2026-03-09T20:54:35.105 INFO:tasks.workunit.client.0.vm05.stderr:set-quota max_objects = 0 for pool 27ffa175-ba53-4b7b-afd8-5d830c8341ae 2026-03-09T20:54:35.107 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:35.106+0000 7feb8601a640 1 -- 192.168.123.105:0/2920547349 >> v1:192.168.123.105:6800/1903060503 conn(0x7feb500782c0 legacy=0x7feb5007a780 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:54:35.107 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:35.106+0000 7feb8601a640 1 -- 192.168.123.105:0/2920547349 >> v1:192.168.123.105:6789/0 conn(0x7feb801086a0 legacy=0x7feb8010e9f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:54:35.108 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:35.107+0000 7feb8601a640 1 -- 192.168.123.105:0/2920547349 shutdown_connections 2026-03-09T20:54:35.108 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:35.107+0000 7feb8601a640 1 -- 192.168.123.105:0/2920547349 >> 192.168.123.105:0/2920547349 conn(0x7feb800fe3b0 msgr2=0x7feb80101560 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:54:35.108 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:35.107+0000 7feb8601a640 1 -- 192.168.123.105:0/2920547349 shutdown_connections 2026-03-09T20:54:35.108 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:54:35.107+0000 7feb8601a640 1 -- 192.168.123.105:0/2920547349 wait complete. 2026-03-09T20:54:35.115 INFO:tasks.workunit.client.0.vm05.stderr:+ sleep 30 2026-03-09T20:54:35.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:35 vm05.local ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/2920547349' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "0"}]': finished 2026-03-09T20:54:35.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:35 vm05.local ceph-mon[61345]: osdmap e784: 8 total, 8 up, 8 in 2026-03-09T20:54:35.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:35 vm05.local ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2920547349' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T20:54:35.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:35 vm05.local ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2920547349' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "0"}]': finished 2026-03-09T20:54:35.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:35 vm05.local ceph-mon[51870]: osdmap e784: 8 total, 8 up, 8 in 2026-03-09T20:54:35.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:35 vm05.local ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2920547349' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T20:54:35.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:35 vm09.local ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2920547349' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "0"}]': finished 2026-03-09T20:54:35.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:35 vm09.local ceph-mon[54524]: osdmap e784: 8 total, 8 up, 8 in 2026-03-09T20:54:35.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:35 vm09.local ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2920547349' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "0"}]: dispatch 2026-03-09T20:54:36.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:36 vm05.local ceph-mon[61345]: pgmap v1743: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:54:36.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:36 vm05.local ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2920547349' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "0"}]': finished 2026-03-09T20:54:36.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:36 vm05.local ceph-mon[61345]: osdmap e785: 8 total, 8 up, 8 in 2026-03-09T20:54:36.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:36 vm05.local ceph-mon[51870]: pgmap v1743: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:54:36.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:36 vm05.local ceph-mon[51870]: from='client.? 
v1:192.168.123.105:0/2920547349' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "0"}]': finished 2026-03-09T20:54:36.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:36 vm05.local ceph-mon[51870]: osdmap e785: 8 total, 8 up, 8 in 2026-03-09T20:54:36.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:36 vm09.local ceph-mon[54524]: pgmap v1743: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:54:36.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:36 vm09.local ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2920547349' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "field": "max_objects", "val": "0"}]': finished 2026-03-09T20:54:36.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:36 vm09.local ceph-mon[54524]: osdmap e785: 8 total, 8 up, 8 in 2026-03-09T20:54:37.273 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:54:36 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:54:38.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:38 vm05.local ceph-mon[61345]: pgmap v1745: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 442 B/s wr, 1 op/s 2026-03-09T20:54:38.410 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:38 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:54:38.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:38 vm05.local ceph-mon[51870]: pgmap v1745: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 442 B/s wr, 1 op/s 2026-03-09T20:54:38.410 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:38 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:54:38.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:38 vm09.local ceph-mon[54524]: pgmap v1745: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 442 B/s wr, 1 op/s 2026-03-09T20:54:38.523 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:38 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:54:38.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:54:38 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:54:38] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:54:39.660 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:39 vm05.local ceph-mon[61345]: pgmap v1746: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 341 B/s wr, 0 op/s 2026-03-09T20:54:39.660 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:39 vm05.local ceph-mon[51870]: pgmap v1746: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 341 B/s wr, 0 op/s 2026-03-09T20:54:39.773 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:39 vm09.local ceph-mon[54524]: pgmap v1746: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 
159 GiB / 160 GiB avail; 853 B/s rd, 341 B/s wr, 0 op/s 2026-03-09T20:54:42.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:41 vm09.local ceph-mon[54524]: pgmap v1747: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 269 B/s wr, 1 op/s 2026-03-09T20:54:42.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:41 vm05.local ceph-mon[61345]: pgmap v1747: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 269 B/s wr, 1 op/s 2026-03-09T20:54:42.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:41 vm05.local ceph-mon[51870]: pgmap v1747: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 269 B/s wr, 1 op/s 2026-03-09T20:54:44.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:43 vm09.local ceph-mon[54524]: pgmap v1748: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 238 B/s wr, 1 op/s 2026-03-09T20:54:44.159 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:43 vm05.local ceph-mon[61345]: pgmap v1748: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 238 B/s wr, 1 op/s 2026-03-09T20:54:44.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:43 vm05.local ceph-mon[51870]: pgmap v1748: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 238 B/s wr, 1 op/s 2026-03-09T20:54:46.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:45 vm09.local ceph-mon[54524]: pgmap v1749: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 204 B/s wr, 1 op/s 2026-03-09T20:54:46.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:45 vm05.local ceph-mon[61345]: pgmap v1749: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 204 B/s wr, 1 op/s 2026-03-09T20:54:46.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:45 vm05.local ceph-mon[51870]: pgmap v1749: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 204 B/s wr, 1 op/s 2026-03-09T20:54:47.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:46 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:54:47.023 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:54:46 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:54:47.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:46 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:54:47.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:46 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:54:48.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:47 vm09.local ceph-mon[54524]: pgmap v1750: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 176 B/s wr, 1 op/s 2026-03-09T20:54:48.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:47 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 
2026-03-09T20:54:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:47 vm05.local ceph-mon[61345]: pgmap v1750: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 176 B/s wr, 1 op/s 2026-03-09T20:54:48.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:47 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:54:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:47 vm05.local ceph-mon[51870]: pgmap v1750: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 176 B/s wr, 1 op/s 2026-03-09T20:54:48.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:47 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:54:48.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:54:48 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:54:48] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:54:50.023 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:49 vm09.local ceph-mon[54524]: pgmap v1751: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:54:50.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:49 vm05.local ceph-mon[61345]: pgmap v1751: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:54:50.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:49 vm05.local ceph-mon[51870]: pgmap v1751: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:54:52.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:51 vm05.local ceph-mon[61345]: pgmap v1752: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:54:52.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:51 vm05.local ceph-mon[51870]: pgmap v1752: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:54:52.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:51 vm09.local ceph-mon[54524]: pgmap v1752: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:54:54.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:53 vm05.local ceph-mon[61345]: pgmap v1753: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:54:54.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:53 vm05.local ceph-mon[51870]: pgmap v1753: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:54:54.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:53 vm09.local ceph-mon[54524]: pgmap v1753: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:54:56.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:55 vm05.local ceph-mon[61345]: pgmap v1754: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:54:56.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:55 vm05.local ceph-mon[51870]: pgmap v1754: 188 pgs: 188 
active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:54:56.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:55 vm09.local ceph-mon[54524]: pgmap v1754: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:54:57.273 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:54:56 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:54:58.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:57 vm05.local ceph-mon[61345]: pgmap v1755: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:54:58.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:57 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:54:58.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:57 vm05.local ceph-mon[51870]: pgmap v1755: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:54:58.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:57 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:54:58.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:57 vm09.local ceph-mon[54524]: pgmap v1755: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:54:58.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:57 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:54:58.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:54:58 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:54:58] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:55:00.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:54:59 vm05.local ceph-mon[61345]: pgmap v1756: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:55:00.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:54:59 vm05.local ceph-mon[51870]: pgmap v1756: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:55:00.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:54:59 vm09.local ceph-mon[54524]: pgmap v1756: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:55:02.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:55:01 vm05.local ceph-mon[61345]: pgmap v1757: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:55:02.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:55:01 vm05.local ceph-mon[61345]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:55:02.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:55:01 vm05.local ceph-mon[51870]: pgmap v1757: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:55:02.160 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:55:01 vm05.local ceph-mon[51870]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:55:02.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:55:01 vm09.local ceph-mon[54524]: pgmap v1757: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T20:55:02.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:55:01 vm09.local ceph-mon[54524]: from='mgr.24602 v1:192.168.123.105:0/3886900736' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:55:04.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:55:03 vm05.local ceph-mon[61345]: pgmap v1758: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:55:04.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:55:03 vm05.local ceph-mon[51870]: pgmap v1758: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:55:04.272 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:55:03 vm09.local ceph-mon[54524]: pgmap v1758: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:55:05.116 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph osd pool delete 27ffa175-ba53-4b7b-afd8-5d830c8341ae 27ffa175-ba53-4b7b-afd8-5d830c8341ae --yes-i-really-really-mean-it 2026-03-09T20:55:05.174 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.173+0000 7f6fc080b640 1 -- 192.168.123.105:0/1345561287 >> v1:192.168.123.105:6790/0 conn(0x7f6fb8111370 legacy=0x7f6fb8113810 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:05.174 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.173+0000 7f6fc080b640 1 -- 192.168.123.105:0/1345561287 shutdown_connections 2026-03-09T20:55:05.174 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.173+0000 7f6fc080b640 1 -- 192.168.123.105:0/1345561287 >> 192.168.123.105:0/1345561287 conn(0x7f6fb81005f0 msgr2=0x7f6fb8102a10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:05.174 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.173+0000 7f6fc080b640 1 -- 192.168.123.105:0/1345561287 shutdown_connections 2026-03-09T20:55:05.174 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.173+0000 7f6fc080b640 1 -- 192.168.123.105:0/1345561287 wait complete. 
2026-03-09T20:55:05.174 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.174+0000 7f6fc080b640 1 Processor -- start 2026-03-09T20:55:05.175 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.174+0000 7f6fc080b640 1 -- start start 2026-03-09T20:55:05.175 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.174+0000 7f6fc080b640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f6fb8110fc0 con 0x7f6fb810d7a0 2026-03-09T20:55:05.175 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.174+0000 7f6fc080b640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f6fb81acda0 con 0x7f6fb8111370 2026-03-09T20:55:05.175 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.174+0000 7f6fc080b640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f6fb81adf80 con 0x7f6fb810a900 2026-03-09T20:55:05.175 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.174+0000 7f6fbed81640 1 --1- >> v1:192.168.123.109:6789/0 conn(0x7f6fb8111370 0x7f6fb81aa670 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.109:6789/0 says I am v1:192.168.123.105:37332/0 (socket says 192.168.123.105:37332) 2026-03-09T20:55:05.175 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.174+0000 7f6fbed81640 1 -- 192.168.123.105:0/2664480364 learned_addr learned my addr 192.168.123.105:0/2664480364 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:55:05.175 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.174+0000 7f6fa37fe640 1 -- 192.168.123.105:0/2664480364 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3878585598 0 0) 0x7f6fb81acda0 con 0x7f6fb8111370 2026-03-09T20:55:05.175 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.174+0000 7f6fa37fe640 1 -- 192.168.123.105:0/2664480364 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f6f8c003620 con 0x7f6fb8111370 2026-03-09T20:55:05.176 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.175+0000 7f6fa37fe640 1 -- 192.168.123.105:0/2664480364 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2701022482 0 0) 0x7f6fb8110fc0 con 0x7f6fb810d7a0 2026-03-09T20:55:05.176 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.175+0000 7f6fa37fe640 1 -- 192.168.123.105:0/2664480364 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f6fb81acda0 con 0x7f6fb810d7a0 2026-03-09T20:55:05.176 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.175+0000 7f6fa37fe640 1 -- 192.168.123.105:0/2664480364 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 590716861 0 0) 0x7f6f8c003620 con 0x7f6fb8111370 2026-03-09T20:55:05.176 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.175+0000 7f6fa37fe640 1 -- 192.168.123.105:0/2664480364 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f6fb8110fc0 con 0x7f6fb8111370 2026-03-09T20:55:05.176 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.175+0000 7f6fa37fe640 1 -- 192.168.123.105:0/2664480364 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f6fac002f40 con 0x7f6fb8111370 2026-03-09T20:55:05.176 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.175+0000 7f6fa37fe640 1 -- 192.168.123.105:0/2664480364 <== mon.0 v1:192.168.123.105:6789/0 2 
==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 358792732 0 0) 0x7f6fb81acda0 con 0x7f6fb810d7a0 2026-03-09T20:55:05.176 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.175+0000 7f6fa37fe640 1 -- 192.168.123.105:0/2664480364 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f6f8c003620 con 0x7f6fb810d7a0 2026-03-09T20:55:05.176 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.175+0000 7f6fa37fe640 1 -- 192.168.123.105:0/2664480364 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f6fb00030c0 con 0x7f6fb810d7a0 2026-03-09T20:55:05.176 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.175+0000 7f6fa37fe640 1 -- 192.168.123.105:0/2664480364 <== mon.1 v1:192.168.123.109:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 548069552 0 0) 0x7f6fb8110fc0 con 0x7f6fb8111370 2026-03-09T20:55:05.176 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.175+0000 7f6fa37fe640 1 -- 192.168.123.105:0/2664480364 >> v1:192.168.123.105:6790/0 conn(0x7f6fb810a900 legacy=0x7f6fb810de80 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:05.176 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.175+0000 7f6fa37fe640 1 -- 192.168.123.105:0/2664480364 >> v1:192.168.123.105:6789/0 conn(0x7f6fb810d7a0 legacy=0x7f6fb810e590 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:05.176 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.175+0000 7f6fa37fe640 1 -- 192.168.123.105:0/2664480364 --> v1:192.168.123.109:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f6fb81af160 con 0x7f6fb8111370 2026-03-09T20:55:05.176 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.175+0000 7f6fa37fe640 1 -- 192.168.123.105:0/2664480364 <== mon.1 v1:192.168.123.109:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f6fac003c60 con 0x7f6fb8111370 2026-03-09T20:55:05.176 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.175+0000 7f6fa37fe640 1 -- 192.168.123.105:0/2664480364 <== mon.1 v1:192.168.123.109:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f6fac0051b0 con 0x7f6fb8111370 2026-03-09T20:55:05.176 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.175+0000 7f6fc080b640 1 -- 192.168.123.105:0/2664480364 --> v1:192.168.123.109:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f6fb81abd90 con 0x7f6fb8111370 2026-03-09T20:55:05.176 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.175+0000 7f6fc080b640 1 -- 192.168.123.105:0/2664480364 --> v1:192.168.123.109:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f6fb81ac2f0 con 0x7f6fb8111370 2026-03-09T20:55:05.177 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.176+0000 7f6fc080b640 1 -- 192.168.123.105:0/2664480364 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f6f88005180 con 0x7f6fb8111370 2026-03-09T20:55:05.178 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.177+0000 7f6fa37fe640 1 -- 192.168.123.105:0/2664480364 <== mon.1 v1:192.168.123.109:6789/0 7 ==== mgrmap(e 22) ==== 100060+0+0 (unknown 1243104359 0 0) 0x7f6fac003390 con 0x7f6fb8111370 2026-03-09T20:55:05.178 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.177+0000 7f6fa37fe640 1 -- 192.168.123.105:0/2664480364 <== mon.1 v1:192.168.123.109:6789/0 8 ==== osd_map(785..785 src has 1..785) ==== 8182+0+0 (unknown 2609020697 0 0) 0x7f6fac095d00 con 0x7f6fb8111370 
2026-03-09T20:55:05.178 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.177+0000 7f6fa37fe640 1 -- 192.168.123.105:0/2664480364 --> v1:192.168.123.109:6789/0 -- mon_subscribe({osdmap=786}) -- 0x7f6fb8110fc0 con 0x7f6fb8111370 2026-03-09T20:55:05.180 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.179+0000 7f6fa37fe640 1 -- 192.168.123.105:0/2664480364 <== mon.1 v1:192.168.123.109:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f6fac0621b0 con 0x7f6fb8111370 2026-03-09T20:55:05.275 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.275+0000 7f6fc080b640 1 -- 192.168.123.105:0/2664480364 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "osd pool delete", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "pool2": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "yes_i_really_really_mean_it": true} v 0) -- 0x7f6f88005470 con 0x7f6fb8111370 2026-03-09T20:55:05.824 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.823+0000 7f6fa37fe640 1 -- 192.168.123.105:0/2664480364 <== mon.1 v1:192.168.123.109:6789/0 10 ==== osd_map(786..786 src has 1..786) ==== 296+0+0 (unknown 2084056899 0 0) 0x7f6fac05a110 con 0x7f6fb8111370 2026-03-09T20:55:05.824 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.823+0000 7f6fa37fe640 1 -- 192.168.123.105:0/2664480364 --> v1:192.168.123.109:6789/0 -- mon_subscribe({osdmap=787}) -- 0x7f6fb81acda0 con 0x7f6fb8111370 2026-03-09T20:55:05.831 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.830+0000 7f6fa37fe640 1 -- 192.168.123.105:0/2664480364 <== mon.1 v1:192.168.123.109:6789/0 11 ==== mon_command_ack([{"prefix": "osd pool delete", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "pool2": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "yes_i_really_really_mean_it": true}]=0 pool '27ffa175-ba53-4b7b-afd8-5d830c8341ae' removed v786) ==== 248+0+0 (unknown 773360732 0 0) 0x7f6fac0670f0 con 0x7f6fb8111370 2026-03-09T20:55:05.890 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.889+0000 7f6fc080b640 1 -- 192.168.123.105:0/2664480364 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "osd pool delete", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "pool2": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "yes_i_really_really_mean_it": true} v 0) -- 0x7f6f880020e0 con 0x7f6fb8111370 2026-03-09T20:55:05.891 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.890+0000 7f6fa37fe640 1 -- 192.168.123.105:0/2664480364 <== mon.1 v1:192.168.123.109:6789/0 12 ==== mon_command_ack([{"prefix": "osd pool delete", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "pool2": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "yes_i_really_really_mean_it": true}]=0 pool '27ffa175-ba53-4b7b-afd8-5d830c8341ae' does not exist v786) ==== 255+0+0 (unknown 694878739 0 0) 0x7f6fac003730 con 0x7f6fb8111370 2026-03-09T20:55:05.891 INFO:tasks.workunit.client.0.vm05.stderr:pool '27ffa175-ba53-4b7b-afd8-5d830c8341ae' does not exist 2026-03-09T20:55:05.893 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.892+0000 7f6fc080b640 1 -- 192.168.123.105:0/2664480364 >> v1:192.168.123.105:6800/1903060503 conn(0x7f6f8c078370 legacy=0x7f6f8c07a830 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:05.893 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.892+0000 7f6fc080b640 1 -- 192.168.123.105:0/2664480364 >> v1:192.168.123.109:6789/0 conn(0x7f6fb8111370 legacy=0x7f6fb81aa670 unknown :-1 s=STATE_CONNECTION_ESTABLISHED 
l=1).mark_down 2026-03-09T20:55:05.893 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.892+0000 7f6fc080b640 1 -- 192.168.123.105:0/2664480364 shutdown_connections 2026-03-09T20:55:05.893 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.892+0000 7f6fc080b640 1 -- 192.168.123.105:0/2664480364 >> 192.168.123.105:0/2664480364 conn(0x7f6fb81005f0 msgr2=0x7f6fb81039e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:05.893 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.892+0000 7f6fc080b640 1 -- 192.168.123.105:0/2664480364 shutdown_connections 2026-03-09T20:55:05.893 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.892+0000 7f6fc080b640 1 -- 192.168.123.105:0/2664480364 wait complete. 2026-03-09T20:55:05.900 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph osd pool delete 7e4fbacd-bf45-40ed-8505-6e93c7ca9219 7e4fbacd-bf45-40ed-8505-6e93c7ca9219 --yes-i-really-really-mean-it 2026-03-09T20:55:05.953 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.952+0000 7f36265bc640 1 -- 192.168.123.105:0/174491582 >> v1:192.168.123.109:6789/0 conn(0x7f362010a900 legacy=0x7f362010ace0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:05.953 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.952+0000 7f36265bc640 1 -- 192.168.123.105:0/174491582 shutdown_connections 2026-03-09T20:55:05.953 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.952+0000 7f36265bc640 1 -- 192.168.123.105:0/174491582 >> 192.168.123.105:0/174491582 conn(0x7f36201005f0 msgr2=0x7f3620102a10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T20:55:05.953 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.952+0000 7f36265bc640 1 -- 192.168.123.105:0/174491582 shutdown_connections 2026-03-09T20:55:05.953 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.952+0000 7f36265bc640 1 -- 192.168.123.105:0/174491582 wait complete. 
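The workunit trace above deletes each leftover test pool by name and then issues the same delete again; the second attempt is acknowledged with "pool '...' does not exist" but still returns 0, so the cleanup is effectively idempotent. A minimal sketch of that pattern, using the two pool names from this run and assuming the monitors permit pool deletion (as they evidently do here):

    # delete leftover test pools; a repeat delete reports "does not exist" and still exits 0
    for pool in 27ffa175-ba53-4b7b-afd8-5d830c8341ae 7e4fbacd-bf45-40ed-8505-6e93c7ca9219; do
        ceph osd pool delete "$pool" "$pool" --yes-i-really-really-mean-it
    done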
2026-03-09T20:55:05.954 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.953+0000 7f36265bc640 1 Processor -- start 2026-03-09T20:55:05.954 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.953+0000 7f36265bc640 1 -- start start 2026-03-09T20:55:05.954 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.953+0000 7f36265bc640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f36201aba40 con 0x7f362010d7a0 2026-03-09T20:55:05.954 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.953+0000 7f36265bc640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f36201acc20 con 0x7f362010a900 2026-03-09T20:55:05.954 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.953+0000 7f36265bc640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f36201ade00 con 0x7f3620111370 2026-03-09T20:55:05.954 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.953+0000 7f361ffff640 1 --1- >> v1:192.168.123.109:6789/0 conn(0x7f362010a900 0x7f36201a0ce0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.109:6789/0 says I am v1:192.168.123.105:37350/0 (socket says 192.168.123.105:37350) 2026-03-09T20:55:05.954 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.953+0000 7f361ffff640 1 -- 192.168.123.105:0/1130049753 learned_addr learned my addr 192.168.123.105:0/1130049753 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-09T20:55:05.954 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.953+0000 7f3624b32640 1 --1- 192.168.123.105:0/1130049753 >> v1:192.168.123.105:6790/0 conn(0x7f3620111370 0x7f36201aa320 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.105:57728/0 (socket says 192.168.123.105:57728) 2026-03-09T20:55:05.954 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.953+0000 7f361d7fa640 1 -- 192.168.123.105:0/1130049753 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1135066091 0 0) 0x7f36201acc20 con 0x7f362010a900 2026-03-09T20:55:05.954 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.953+0000 7f361d7fa640 1 -- 192.168.123.105:0/1130049753 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f35f4003620 con 0x7f362010a900 2026-03-09T20:55:05.954 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.953+0000 7f361d7fa640 1 -- 192.168.123.105:0/1130049753 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 4128599097 0 0) 0x7f36201aba40 con 0x7f362010d7a0 2026-03-09T20:55:05.954 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.953+0000 7f361d7fa640 1 -- 192.168.123.105:0/1130049753 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f36201acc20 con 0x7f362010d7a0 2026-03-09T20:55:05.954 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.954+0000 7f361d7fa640 1 -- 192.168.123.105:0/1130049753 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3743618856 0 0) 0x7f35f4003620 con 0x7f362010a900 2026-03-09T20:55:05.954 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.954+0000 7f361d7fa640 1 -- 192.168.123.105:0/1130049753 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f36201aba40 con 0x7f362010a900 2026-03-09T20:55:05.954 
INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.954+0000 7f361d7fa640 1 -- 192.168.123.105:0/1130049753 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f3604003200 con 0x7f362010a900 2026-03-09T20:55:05.955 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.954+0000 7f361d7fa640 1 -- 192.168.123.105:0/1130049753 <== mon.1 v1:192.168.123.109:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 3240053438 0 0) 0x7f36201aba40 con 0x7f362010a900 2026-03-09T20:55:05.955 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.954+0000 7f361d7fa640 1 -- 192.168.123.105:0/1130049753 >> v1:192.168.123.105:6790/0 conn(0x7f3620111370 legacy=0x7f36201aa320 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:05.955 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.954+0000 7f361d7fa640 1 -- 192.168.123.105:0/1130049753 >> v1:192.168.123.105:6789/0 conn(0x7f362010d7a0 legacy=0x7f36201a6bf0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:05.955 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.954+0000 7f361d7fa640 1 -- 192.168.123.105:0/1130049753 --> v1:192.168.123.109:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f36201aefe0 con 0x7f362010a900 2026-03-09T20:55:05.955 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.954+0000 7f36265bc640 1 -- 192.168.123.105:0/1130049753 --> v1:192.168.123.109:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f36201abc10 con 0x7f362010a900 2026-03-09T20:55:05.955 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.954+0000 7f36265bc640 1 -- 192.168.123.105:0/1130049753 --> v1:192.168.123.109:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f36201ac1d0 con 0x7f362010a900 2026-03-09T20:55:05.956 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.955+0000 7f361d7fa640 1 -- 192.168.123.105:0/1130049753 <== mon.1 v1:192.168.123.109:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f3604004720 con 0x7f362010a900 2026-03-09T20:55:05.956 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.955+0000 7f361d7fa640 1 -- 192.168.123.105:0/1130049753 <== mon.1 v1:192.168.123.109:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 304876327 0 0) 0x7f3604004e40 con 0x7f362010a900 2026-03-09T20:55:05.956 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.955+0000 7f361d7fa640 1 -- 192.168.123.105:0/1130049753 <== mon.1 v1:192.168.123.109:6789/0 7 ==== mgrmap(e 22) ==== 100060+0+0 (unknown 1243104359 0 0) 0x7f36040050c0 con 0x7f362010a900 2026-03-09T20:55:05.957 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.955+0000 7f36265bc640 1 -- 192.168.123.105:0/1130049753 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f35e4005180 con 0x7f362010a900 2026-03-09T20:55:05.957 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.956+0000 7f361d7fa640 1 -- 192.168.123.105:0/1130049753 <== mon.1 v1:192.168.123.109:6789/0 8 ==== osd_map(786..786 src has 1..786) ==== 7794+0+0 (unknown 398349998 0 0) 0x7f3604095980 con 0x7f362010a900 2026-03-09T20:55:05.957 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.956+0000 7f361d7fa640 1 -- 192.168.123.105:0/1130049753 --> v1:192.168.123.109:6789/0 -- mon_subscribe({osdmap=787}) -- 0x7f36201aba40 con 0x7f362010a900 2026-03-09T20:55:05.959 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:05.958+0000 7f361d7fa640 1 -- 192.168.123.105:0/1130049753 
<== mon.1 v1:192.168.123.109:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f3604062d40 con 0x7f362010a900 2026-03-09T20:55:06.051 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:06.050+0000 7f36265bc640 1 -- 192.168.123.105:0/1130049753 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "osd pool delete", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "pool2": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "yes_i_really_really_mean_it": true} v 0) -- 0x7f35e4005470 con 0x7f362010a900 2026-03-09T20:55:06.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:55:05 vm05.local ceph-mon[61345]: pgmap v1759: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:55:06.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:55:05 vm05.local ceph-mon[61345]: from='client.? v1:192.168.123.105:0/2664480364' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "pool2": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T20:55:06.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:55:05 vm05.local ceph-mon[61345]: from='client.50638 ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "pool2": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T20:55:06.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:55:05 vm05.local ceph-mon[51870]: pgmap v1759: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:55:06.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:55:05 vm05.local ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2664480364' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "pool2": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T20:55:06.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:55:05 vm05.local ceph-mon[51870]: from='client.50638 ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "pool2": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T20:55:06.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:55:05 vm09.local ceph-mon[54524]: pgmap v1759: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T20:55:06.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:55:05 vm09.local ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/2664480364' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "pool2": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T20:55:06.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:55:05 vm09.local ceph-mon[54524]: from='client.50638 ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "pool2": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T20:55:06.844 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:06.843+0000 7f361d7fa640 1 -- 192.168.123.105:0/1130049753 <== mon.1 v1:192.168.123.109:6789/0 10 ==== osd_map(787..787 src has 1..787) ==== 296+0+0 (unknown 3925730417 0 0) 0x7f360405aca0 con 0x7f362010a900 2026-03-09T20:55:06.845 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:06.844+0000 7f361d7fa640 1 -- 192.168.123.105:0/1130049753 --> v1:192.168.123.109:6789/0 -- mon_subscribe({osdmap=788}) -- 0x7f36201acc20 con 0x7f362010a900 2026-03-09T20:55:06.847 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:06.846+0000 7f361d7fa640 1 -- 192.168.123.105:0/1130049753 <== mon.1 v1:192.168.123.109:6789/0 11 ==== mon_command_ack([{"prefix": "osd pool delete", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "pool2": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "yes_i_really_really_mean_it": true}]=0 pool '7e4fbacd-bf45-40ed-8505-6e93c7ca9219' removed v787) ==== 248+0+0 (unknown 1627157716 0 0) 0x7f3604067c80 con 0x7f362010a900 2026-03-09T20:55:06.905 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:06.904+0000 7f36265bc640 1 -- 192.168.123.105:0/1130049753 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "osd pool delete", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "pool2": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "yes_i_really_really_mean_it": true} v 0) -- 0x7f35e40028a0 con 0x7f362010a900 2026-03-09T20:55:06.906 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:06.905+0000 7f361d7fa640 1 -- 192.168.123.105:0/1130049753 <== mon.1 v1:192.168.123.109:6789/0 12 ==== mon_command_ack([{"prefix": "osd pool delete", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "pool2": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "yes_i_really_really_mean_it": true}]=0 pool '7e4fbacd-bf45-40ed-8505-6e93c7ca9219' does not exist v787) ==== 255+0+0 (unknown 1642354732 0 0) 0x7f3604093730 con 0x7f362010a900 2026-03-09T20:55:06.906 INFO:tasks.workunit.client.0.vm05.stderr:pool '7e4fbacd-bf45-40ed-8505-6e93c7ca9219' does not exist 2026-03-09T20:55:06.908 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:06.908+0000 7f36265bc640 1 -- 192.168.123.105:0/1130049753 >> v1:192.168.123.105:6800/1903060503 conn(0x7f35f4077f40 legacy=0x7f35f407a400 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:06.909 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:06.908+0000 7f36265bc640 1 -- 192.168.123.105:0/1130049753 >> v1:192.168.123.109:6789/0 conn(0x7f362010a900 legacy=0x7f36201a0ce0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T20:55:06.909 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:06.908+0000 7f36265bc640 1 -- 192.168.123.105:0/1130049753 shutdown_connections 2026-03-09T20:55:06.909 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:06.908+0000 7f36265bc640 1 -- 192.168.123.105:0/1130049753 >> 192.168.123.105:0/1130049753 conn(0x7f36201005f0 msgr2=0x7f36201039e0 unknown :-1 
s=STATE_NONE l=0).mark_down 2026-03-09T20:55:06.909 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:06.908+0000 7f36265bc640 1 -- 192.168.123.105:0/1130049753 shutdown_connections 2026-03-09T20:55:06.909 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-09T20:55:06.908+0000 7f36265bc640 1 -- 192.168.123.105:0/1130049753 wait complete. 2026-03-09T20:55:06.917 INFO:tasks.workunit.client.0.vm05.stdout:OK 2026-03-09T20:55:06.917 INFO:tasks.workunit.client.0.vm05.stderr:+ echo OK 2026-03-09T20:55:06.917 INFO:teuthology.orchestra.run:Running command with timeout 3600 2026-03-09T20:55:06.917 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0/tmp 2026-03-09T20:55:06.988 INFO:tasks.workunit:Stopping ['rados/test.sh', 'rados/test_pool_quota.sh'] on client.0... 2026-03-09T20:55:06.988 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/clone.client.0 2026-03-09T20:55:07.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:55:06 vm05.local ceph-mon[51870]: from='client.50638 ' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "pool2": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "yes_i_really_really_mean_it": true}]': finished 2026-03-09T20:55:07.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:55:06 vm05.local ceph-mon[51870]: osdmap e786: 8 total, 8 up, 8 in 2026-03-09T20:55:07.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:55:06 vm05.local ceph-mon[51870]: from='client.? v1:192.168.123.105:0/2664480364' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "pool2": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T20:55:07.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:55:06 vm05.local ceph-mon[51870]: from='client.50638 ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "pool2": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T20:55:07.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:55:06 vm05.local ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1130049753' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "pool2": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T20:55:07.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:55:06 vm05.local ceph-mon[51870]: from='client.50644 ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "pool2": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T20:55:07.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:55:06 vm05.local ceph-mon[61345]: from='client.50638 ' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "pool2": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "yes_i_really_really_mean_it": true}]': finished 2026-03-09T20:55:07.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:55:06 vm05.local ceph-mon[61345]: osdmap e786: 8 total, 8 up, 8 in 2026-03-09T20:55:07.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:55:06 vm05.local ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/2664480364' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "pool2": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T20:55:07.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:55:06 vm05.local ceph-mon[61345]: from='client.50638 ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "pool2": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T20:55:07.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:55:06 vm05.local ceph-mon[61345]: from='client.? v1:192.168.123.105:0/1130049753' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "pool2": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T20:55:07.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:55:06 vm05.local ceph-mon[61345]: from='client.50644 ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "pool2": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T20:55:07.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:55:06 vm09.local ceph-mon[54524]: from='client.50638 ' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "pool2": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "yes_i_really_really_mean_it": true}]': finished 2026-03-09T20:55:07.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:55:06 vm09.local ceph-mon[54524]: osdmap e786: 8 total, 8 up, 8 in 2026-03-09T20:55:07.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:55:06 vm09.local ceph-mon[54524]: from='client.? v1:192.168.123.105:0/2664480364' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "pool2": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T20:55:07.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:55:06 vm09.local ceph-mon[54524]: from='client.50638 ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "pool2": "27ffa175-ba53-4b7b-afd8-5d830c8341ae", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T20:55:07.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:55:06 vm09.local ceph-mon[54524]: from='client.? 
v1:192.168.123.105:0/1130049753' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "pool2": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T20:55:07.273 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:55:06 vm09.local ceph-mon[54524]: from='client.50644 ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "pool2": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T20:55:07.273 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:55:06 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug there is no tcmu-runner data available 2026-03-09T20:55:07.410 DEBUG:teuthology.parallel:result is None 2026-03-09T20:55:07.410 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0 2026-03-09T20:55:07.434 INFO:tasks.workunit:Deleted dir /home/ubuntu/cephtest/mnt.0/client.0 2026-03-09T20:55:07.434 DEBUG:teuthology.orchestra.run.vm05:> rmdir -- /home/ubuntu/cephtest/mnt.0 2026-03-09T20:55:07.492 INFO:tasks.workunit:Deleted artificial mount point /home/ubuntu/cephtest/mnt.0/client.0 2026-03-09T20:55:07.492 DEBUG:teuthology.run_tasks:Unwinding manager cephadm 2026-03-09T20:55:07.494 INFO:tasks.cephadm:Teardown begin 2026-03-09T20:55:07.494 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-09T20:55:07.556 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-09T20:55:07.585 INFO:tasks.cephadm:Disabling cephadm mgr module 2026-03-09T20:55:07.585 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph mgr module disable cephadm 2026-03-09T20:55:07.769 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/mon.a/config 2026-03-09T20:55:07.789 INFO:teuthology.orchestra.run.vm05.stderr:Error: statfs /etc/ceph/ceph.client.admin.keyring: no such file or directory 2026-03-09T20:55:07.811 DEBUG:teuthology.orchestra.run:got remote process result: 125 2026-03-09T20:55:07.811 INFO:tasks.cephadm:Cleaning up testdir ceph.* files... 2026-03-09T20:55:07.811 DEBUG:teuthology.orchestra.run.vm05:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-09T20:55:07.826 DEBUG:teuthology.orchestra.run.vm09:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-09T20:55:07.840 INFO:tasks.cephadm:Stopping all daemons... 2026-03-09T20:55:07.840 INFO:tasks.cephadm.mon.a:Stopping mon.a... 
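Teardown above removes /etc/ceph/ceph.conf and the admin keyring first, so the following "cephadm ... shell -k /etc/ceph/ceph.client.admin.keyring -- ceph mgr module disable cephadm" fails with exit 125 ("no such file or directory" for the keyring) and the run simply carries on to stop the daemons. A hedged sketch of issuing the same disable without pointing at the deleted file, assuming cephadm can still infer a config and keyring from the daemon directories under /var/lib/ceph/<fsid> on the host; the fsid is the one from this run:

    # let cephadm infer config/keyring from the daemon directories instead of passing -c/-k
    sudo cephadm shell --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea -- ceph mgr module disable cephadm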
2026-03-09T20:55:07.841 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@mon.a 2026-03-09T20:55:07.903 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:55:07 vm09.local ceph-mon[54524]: pgmap v1761: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:55:07.903 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:55:07 vm09.local ceph-mon[54524]: Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T20:55:07.903 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:55:07 vm09.local ceph-mon[54524]: from='client.50644 ' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "pool2": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "yes_i_really_really_mean_it": true}]': finished 2026-03-09T20:55:07.903 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:55:07 vm09.local ceph-mon[54524]: osdmap e787: 8 total, 8 up, 8 in 2026-03-09T20:55:07.903 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:55:07 vm09.local ceph-mon[54524]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:55:07.903 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:55:07 vm09.local ceph-mon[54524]: from='client.? v1:192.168.123.105:0/1130049753' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "pool2": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T20:55:07.903 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:55:07 vm09.local ceph-mon[54524]: from='client.50644 ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "pool2": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T20:55:08.121 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:55:07 vm05.local ceph-mon[61345]: pgmap v1761: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:55:08.121 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:55:07 vm05.local ceph-mon[61345]: Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T20:55:08.121 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:55:07 vm05.local ceph-mon[61345]: from='client.50644 ' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "pool2": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "yes_i_really_really_mean_it": true}]': finished 2026-03-09T20:55:08.121 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:55:07 vm05.local ceph-mon[61345]: osdmap e787: 8 total, 8 up, 8 in 2026-03-09T20:55:08.121 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:55:07 vm05.local ceph-mon[61345]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:55:08.121 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:55:07 vm05.local ceph-mon[61345]: from='client.? 
v1:192.168.123.105:0/1130049753' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "pool2": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T20:55:08.121 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:55:07 vm05.local ceph-mon[61345]: from='client.50644 ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "pool2": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T20:55:08.122 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:55:07 vm05.local ceph-mon[51870]: pgmap v1761: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T20:55:08.122 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:55:07 vm05.local ceph-mon[51870]: Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-09T20:55:08.122 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:55:07 vm05.local ceph-mon[51870]: from='client.50644 ' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "pool2": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "yes_i_really_really_mean_it": true}]': finished 2026-03-09T20:55:08.122 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:55:07 vm05.local ceph-mon[51870]: osdmap e787: 8 total, 8 up, 8 in 2026-03-09T20:55:08.122 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:55:07 vm05.local ceph-mon[51870]: from='client.14610 v1:192.168.123.109:0/2166267233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T20:55:08.122 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:55:07 vm05.local ceph-mon[51870]: from='client.? v1:192.168.123.105:0/1130049753' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "pool2": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T20:55:08.122 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:55:07 vm05.local ceph-mon[51870]: from='client.50644 ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "pool2": "7e4fbacd-bf45-40ed-8505-6e93c7ca9219", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T20:55:08.122 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:55:07 vm05.local systemd[1]: Stopping Ceph mon.a for c0151936-1bf4-11f1-b896-23f7bea8a6ea... 
2026-03-09T20:55:08.122 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:55:07 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mon-a[51846]: 2026-03-09T20:55:07.986+0000 7f7ca3104640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T20:55:08.122 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 09 20:55:07 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mon-a[51846]: 2026-03-09T20:55:07.986+0000 7f7ca3104640 -1 mon.a@0(leader) e3 *** Got Signal Terminated *** 2026-03-09T20:55:08.329 DEBUG:teuthology.orchestra.run.vm05:> sudo pkill -f 'journalctl -f -n 0 -u ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@mon.a.service' 2026-03-09T20:55:08.382 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T20:55:08.382 INFO:tasks.cephadm.mon.a:Stopped mon.a 2026-03-09T20:55:08.382 INFO:tasks.cephadm.mon.b:Stopping mon.c... 2026-03-09T20:55:08.382 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@mon.c 2026-03-09T20:55:08.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:55:08 vm05.local systemd[1]: Stopping Ceph mon.c for c0151936-1bf4-11f1-b896-23f7bea8a6ea... 2026-03-09T20:55:08.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:55:08 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mon-c[61320]: 2026-03-09T20:55:08.548+0000 7fb735fc8640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.c -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T20:55:08.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:55:08 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mon-c[61320]: 2026-03-09T20:55:08.548+0000 7fb735fc8640 -1 mon.c@2(peon) e3 *** Got Signal Terminated *** 2026-03-09T20:55:08.634 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 20:55:08 vm05.local podman[170274]: 2026-03-09 20:55:08.616956781 +0000 UTC m=+0.097292719 container died acf150ca4348c3c2159aeeeab35b2fb50f3582820bea8096a350877217b89a63 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mon-c, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True) 2026-03-09T20:55:08.794 DEBUG:teuthology.orchestra.run.vm05:> sudo pkill -f 'journalctl -f -n 0 -u ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@mon.c.service' 2026-03-09T20:55:08.823 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T20:55:08.823 INFO:tasks.cephadm.mon.b:Stopped 
mon.c 2026-03-09T20:55:08.823 INFO:tasks.cephadm.mon.b:Stopping mon.b... 2026-03-09T20:55:08.823 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@mon.b 2026-03-09T20:55:08.910 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:55:08 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y[52101]: ::ffff:192.168.123.109 - - [09/Mar/2026:20:55:08] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T20:55:09.190 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:55:08 vm09.local systemd[1]: Stopping Ceph mon.b for c0151936-1bf4-11f1-b896-23f7bea8a6ea... 2026-03-09T20:55:09.190 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:55:08 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mon-b[54481]: 2026-03-09T20:55:08.934+0000 7f58ac73d640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.b -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T20:55:09.190 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 20:55:08 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mon-b[54481]: 2026-03-09T20:55:08.934+0000 7f58ac73d640 -1 mon.b@1(peon) e3 *** Got Signal Terminated *** 2026-03-09T20:55:09.370 DEBUG:teuthology.orchestra.run.vm09:> sudo pkill -f 'journalctl -f -n 0 -u ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@mon.b.service' 2026-03-09T20:55:09.406 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T20:55:09.406 INFO:tasks.cephadm.mon.b:Stopped mon.b 2026-03-09T20:55:09.406 INFO:tasks.cephadm.mgr.y:Stopping mgr.y... 2026-03-09T20:55:09.406 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@mgr.y 2026-03-09T20:55:09.693 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:55:09 vm05.local systemd[1]: Stopping Ceph mgr.y for c0151936-1bf4-11f1-b896-23f7bea8a6ea... 2026-03-09T20:55:09.693 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 09 20:55:09 vm05.local podman[170393]: 2026-03-09 20:55:09.564767786 +0000 UTC m=+0.071232171 container died 1bd191aee88b5b4aa77a9e7745a5ba90eb16568ed1ff9b0d7f79a867b441bdb8 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-y, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default) 2026-03-09T20:55:09.751 DEBUG:teuthology.orchestra.run.vm05:> sudo pkill -f 'journalctl -f -n 0 -u ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@mgr.y.service' 2026-03-09T20:55:09.786 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T20:55:09.786 INFO:tasks.cephadm.mgr.y:Stopped mgr.y 2026-03-09T20:55:09.786 INFO:tasks.cephadm.mgr.x:Stopping mgr.x... 
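Each daemon is stopped with the same two-step pattern seen for mon.a, mon.c, mon.b and mgr.y above: stop the daemon's systemd unit, then kill the "journalctl -f" follower that the harness had attached to that unit. A minimal sketch of one iteration, with the fsid taken from this run and the daemon name as a placeholder:

    fsid=c0151936-1bf4-11f1-b896-23f7bea8a6ea
    daemon=mgr.x                                  # e.g. mon.a, mgr.y, osd.0, ...
    sudo systemctl stop "ceph-${fsid}@${daemon}"
    # pkill exits non-zero if the follower has already gone away
    sudo pkill -f "journalctl -f -n 0 -u ceph-${fsid}@${daemon}.service"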
2026-03-09T20:55:09.786 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@mgr.x 2026-03-09T20:55:10.097 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:55:09 vm09.local systemd[1]: Stopping Ceph mgr.x for c0151936-1bf4-11f1-b896-23f7bea8a6ea... 2026-03-09T20:55:10.097 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:55:09 vm09.local podman[91132]: 2026-03-09 20:55:09.91785105 +0000 UTC m=+0.047921571 container died 938157644bb99287804d88cc0537eba009c773bcc64bb5e7f93724b1b5fc1e10 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS) 2026-03-09T20:55:10.097 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:55:10 vm09.local podman[91132]: 2026-03-09 20:55:10.045854378 +0000 UTC m=+0.175924899 container remove 938157644bb99287804d88cc0537eba009c773bcc64bb5e7f93724b1b5fc1e10 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-09T20:55:10.097 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:55:10 vm09.local bash[91132]: ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-mgr-x 2026-03-09T20:55:10.097 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 09 20:55:10 vm09.local systemd[1]: ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@mgr.x.service: Main process exited, code=exited, status=143/n/a 2026-03-09T20:55:10.108 DEBUG:teuthology.orchestra.run.vm09:> sudo pkill -f 'journalctl -f -n 0 -u ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@mgr.x.service' 2026-03-09T20:55:10.140 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T20:55:10.140 INFO:tasks.cephadm.mgr.x:Stopped mgr.x 2026-03-09T20:55:10.140 INFO:tasks.cephadm.osd.0:Stopping osd.0... 2026-03-09T20:55:10.140 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@osd.0 2026-03-09T20:55:10.660 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 20:55:10 vm05.local systemd[1]: Stopping Ceph osd.0 for c0151936-1bf4-11f1-b896-23f7bea8a6ea... 
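The "status=143" reported for the mgr.x unit above is 128+15, i.e. the main container process exited on SIGTERM, which is the expected outcome of "systemctl stop". If a stopped unit needs to be inspected after teardown, its last journal lines are still available; a sketch, using the mgr.x unit name from this run:

    sudo journalctl -u "ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@mgr.x.service" -n 50 --no-pager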
2026-03-09T20:55:10.660 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 20:55:10 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-0[65089]: 2026-03-09T20:55:10.243+0000 7f0ab5d5d640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T20:55:10.660 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 20:55:10 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-0[65089]: 2026-03-09T20:55:10.243+0000 7f0ab5d5d640 -1 osd.0 787 *** Got signal Terminated *** 2026-03-09T20:55:10.660 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 20:55:10 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-0[65089]: 2026-03-09T20:55:10.243+0000 7f0ab5d5d640 -1 osd.0 787 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T20:55:15.568 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 20:55:15 vm05.local podman[170508]: 2026-03-09 20:55:15.29173616 +0000 UTC m=+5.065222117 container died e62dc9628eed398a66aba86537d9b3cecabff871b318a75e51adef946a23bba6 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2) 2026-03-09T20:55:15.568 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 20:55:15 vm05.local podman[170508]: 2026-03-09 20:55:15.416236314 +0000 UTC m=+5.189722271 container remove e62dc9628eed398a66aba86537d9b3cecabff871b318a75e51adef946a23bba6 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, io.buildah.version=1.41.3, org.label-schema.build-date=20260223) 2026-03-09T20:55:15.568 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 20:55:15 vm05.local bash[170508]: ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-0 2026-03-09T20:55:15.869 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 20:55:15 vm05.local podman[170584]: 2026-03-09 20:55:15.568271106 +0000 UTC m=+0.018078308 container create e0879644cef82ed92557a4dd53d8bddb2a69a29b9ec99a29804f6a63fbde308f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-0-deactivate, OSD_FLAVOR=default, 
org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-09T20:55:15.869 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 20:55:15 vm05.local podman[170584]: 2026-03-09 20:55:15.613753799 +0000 UTC m=+0.063560992 container init e0879644cef82ed92557a4dd53d8bddb2a69a29b9ec99a29804f6a63fbde308f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-0-deactivate, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223) 2026-03-09T20:55:15.869 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 20:55:15 vm05.local podman[170584]: 2026-03-09 20:55:15.619170418 +0000 UTC m=+0.068977620 container start e0879644cef82ed92557a4dd53d8bddb2a69a29b9ec99a29804f6a63fbde308f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-0-deactivate, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2) 2026-03-09T20:55:15.869 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 20:55:15 vm05.local podman[170584]: 2026-03-09 20:55:15.620459371 +0000 UTC m=+0.070266573 container attach e0879644cef82ed92557a4dd53d8bddb2a69a29b9ec99a29804f6a63fbde308f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-0-deactivate, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, 
GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-09T20:55:15.869 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 20:55:15 vm05.local podman[170584]: 2026-03-09 20:55:15.561459417 +0000 UTC m=+0.011266629 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T20:55:15.869 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 20:55:15 vm05.local conmon[170595]: conmon e0879644cef82ed92557 : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e0879644cef82ed92557a4dd53d8bddb2a69a29b9ec99a29804f6a63fbde308f.scope/memory.events 2026-03-09T20:55:15.869 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 20:55:15 vm05.local podman[170584]: 2026-03-09 20:55:15.754154927 +0000 UTC m=+0.203962129 container died e0879644cef82ed92557a4dd53d8bddb2a69a29b9ec99a29804f6a63fbde308f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-0-deactivate, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-09T20:55:15.897 DEBUG:teuthology.orchestra.run.vm05:> sudo pkill -f 'journalctl -f -n 0 -u ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@osd.0.service' 2026-03-09T20:55:15.930 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T20:55:15.930 INFO:tasks.cephadm.osd.0:Stopped osd.0 2026-03-09T20:55:15.930 INFO:tasks.cephadm.osd.1:Stopping osd.1... 2026-03-09T20:55:15.930 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@osd.1 2026-03-09T20:55:16.160 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 20:55:16 vm05.local systemd[1]: Stopping Ceph osd.1 for c0151936-1bf4-11f1-b896-23f7bea8a6ea... 
2026-03-09T20:55:16.160 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 20:55:16 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-1[70325]: 2026-03-09T20:55:16.072+0000 7f091a317640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.1 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T20:55:16.160 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 20:55:16 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-1[70325]: 2026-03-09T20:55:16.072+0000 7f091a317640 -1 osd.1 787 *** Got signal Terminated *** 2026-03-09T20:55:16.160 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 20:55:16 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-1[70325]: 2026-03-09T20:55:16.072+0000 7f091a317640 -1 osd.1 787 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T20:55:21.389 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 20:55:21 vm05.local podman[170706]: 2026-03-09 20:55:21.119081537 +0000 UTC m=+5.060588413 container died da5a6c139a9fa0951572277b786fc24ed905740af6cacbb081ea4a44b978ebb4 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default) 2026-03-09T20:55:21.389 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 20:55:21 vm05.local podman[170706]: 2026-03-09 20:55:21.245300758 +0000 UTC m=+5.186807634 container remove da5a6c139a9fa0951572277b786fc24ed905740af6cacbb081ea4a44b978ebb4 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-09T20:55:21.389 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 20:55:21 vm05.local bash[170706]: ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-1 2026-03-09T20:55:21.389 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 20:55:21 vm05.local podman[170781]: 2026-03-09 20:55:21.364163411 +0000 UTC m=+0.014326086 container create b9173690f037c2f54fb90e3f1c746302f53e836699696dddb98e546229b43220 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-1-deactivate, 
FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-09T20:55:21.653 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 20:55:21 vm05.local podman[170781]: 2026-03-09 20:55:21.405135771 +0000 UTC m=+0.055298457 container init b9173690f037c2f54fb90e3f1c746302f53e836699696dddb98e546229b43220 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-1-deactivate, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) 2026-03-09T20:55:21.653 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 20:55:21 vm05.local podman[170781]: 2026-03-09 20:55:21.414548029 +0000 UTC m=+0.064710704 container start b9173690f037c2f54fb90e3f1c746302f53e836699696dddb98e546229b43220 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-1-deactivate, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, CEPH_REF=squid) 2026-03-09T20:55:21.653 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 20:55:21 vm05.local podman[170781]: 2026-03-09 20:55:21.415643439 +0000 UTC m=+0.065806114 container attach b9173690f037c2f54fb90e3f1c746302f53e836699696dddb98e546229b43220 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-1-deactivate, org.label-schema.build-date=20260223, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, 
GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-09T20:55:21.653 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 20:55:21 vm05.local podman[170781]: 2026-03-09 20:55:21.358321386 +0000 UTC m=+0.008484061 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T20:55:21.654 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 20:55:21 vm05.local conmon[170791]: conmon b9173690f037c2f54fb9 : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b9173690f037c2f54fb90e3f1c746302f53e836699696dddb98e546229b43220.scope/memory.events 2026-03-09T20:55:21.654 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 20:55:21 vm05.local podman[170781]: 2026-03-09 20:55:21.540315946 +0000 UTC m=+0.190478621 container died b9173690f037c2f54fb90e3f1c746302f53e836699696dddb98e546229b43220 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-1-deactivate, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-09T20:55:21.673 DEBUG:teuthology.orchestra.run.vm05:> sudo pkill -f 'journalctl -f -n 0 -u ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@osd.1.service' 2026-03-09T20:55:21.705 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T20:55:21.705 INFO:tasks.cephadm.osd.1:Stopped osd.1 2026-03-09T20:55:21.705 INFO:tasks.cephadm.osd.2:Stopping osd.2... 2026-03-09T20:55:21.705 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@osd.2 2026-03-09T20:55:21.910 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 20:55:21 vm05.local systemd[1]: Stopping Ceph osd.2 for c0151936-1bf4-11f1-b896-23f7bea8a6ea... 
2026-03-09T20:55:21.910 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 20:55:21 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-2[75948]: 2026-03-09T20:55:21.844+0000 7f25bd53a640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.2 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T20:55:21.910 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 20:55:21 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-2[75948]: 2026-03-09T20:55:21.844+0000 7f25bd53a640 -1 osd.2 787 *** Got signal Terminated *** 2026-03-09T20:55:21.910 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 20:55:21 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-2[75948]: 2026-03-09T20:55:21.844+0000 7f25bd53a640 -1 osd.2 787 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T20:55:27.158 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 20:55:26 vm05.local podman[170901]: 2026-03-09 20:55:26.884507491 +0000 UTC m=+5.054529116 container died 294e4e666700ce3bfd1bf910e99e2b3d8488438162a990d7a95a7371359b1750 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-09T20:55:27.158 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 20:55:27 vm05.local podman[170901]: 2026-03-09 20:55:27.022291417 +0000 UTC m=+5.192313024 container remove 294e4e666700ce3bfd1bf910e99e2b3d8488438162a990d7a95a7371359b1750 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223) 2026-03-09T20:55:27.158 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 20:55:27 vm05.local bash[170901]: ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-2 2026-03-09T20:55:27.410 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 20:55:27 vm05.local podman[170980]: 2026-03-09 20:55:27.15817581 +0000 UTC m=+0.016898461 container create c4fdde8ccad7f34ee01275624cc74ab51994e4cd949208162f13f00ecdb0e026 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-2-deactivate, 
org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-09T20:55:27.410 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 20:55:27 vm05.local podman[170980]: 2026-03-09 20:55:27.196411578 +0000 UTC m=+0.055134239 container init c4fdde8ccad7f34ee01275624cc74ab51994e4cd949208162f13f00ecdb0e026 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-2-deactivate, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-09T20:55:27.410 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 20:55:27 vm05.local podman[170980]: 2026-03-09 20:55:27.200502064 +0000 UTC m=+0.059224725 container start c4fdde8ccad7f34ee01275624cc74ab51994e4cd949208162f13f00ecdb0e026 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-2-deactivate, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20260223, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-09T20:55:27.410 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 20:55:27 vm05.local podman[170980]: 2026-03-09 20:55:27.203529581 +0000 UTC m=+0.062252252 container attach c4fdde8ccad7f34ee01275624cc74ab51994e4cd949208162f13f00ecdb0e026 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-2-deactivate, CEPH_REF=squid, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, org.label-schema.name=CentOS 
Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default) 2026-03-09T20:55:27.410 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 20:55:27 vm05.local podman[170980]: 2026-03-09 20:55:27.151878122 +0000 UTC m=+0.010600783 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T20:55:27.410 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 20:55:27 vm05.local podman[170980]: 2026-03-09 20:55:27.328690582 +0000 UTC m=+0.187413233 container died c4fdde8ccad7f34ee01275624cc74ab51994e4cd949208162f13f00ecdb0e026 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-2-deactivate, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-09T20:55:27.457 DEBUG:teuthology.orchestra.run.vm05:> sudo pkill -f 'journalctl -f -n 0 -u ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@osd.2.service' 2026-03-09T20:55:27.490 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T20:55:27.490 INFO:tasks.cephadm.osd.2:Stopped osd.2 2026-03-09T20:55:27.490 INFO:tasks.cephadm.osd.3:Stopping osd.3... 2026-03-09T20:55:27.490 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@osd.3 2026-03-09T20:55:27.910 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 09 20:55:27 vm05.local systemd[1]: Stopping Ceph osd.3 for c0151936-1bf4-11f1-b896-23f7bea8a6ea... 
2026-03-09T20:55:27.910 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 09 20:55:27 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-3[81622]: 2026-03-09T20:55:27.622+0000 7f15ffd29640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.3 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T20:55:27.910 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 09 20:55:27 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-3[81622]: 2026-03-09T20:55:27.622+0000 7f15ffd29640 -1 osd.3 787 *** Got signal Terminated *** 2026-03-09T20:55:27.910 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 09 20:55:27 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-3[81622]: 2026-03-09T20:55:27.622+0000 7f15ffd29640 -1 osd.3 787 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T20:55:32.937 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 09 20:55:32 vm05.local podman[171101]: 2026-03-09 20:55:32.66765632 +0000 UTC m=+5.059529484 container died 1e3d8cf33096f2d20ac3bf25fdac5c1a834a49e6f4c0c35c02f2a0671aaad389 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True) 2026-03-09T20:55:32.937 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 09 20:55:32 vm05.local podman[171101]: 2026-03-09 20:55:32.799113134 +0000 UTC m=+5.190986298 container remove 1e3d8cf33096f2d20ac3bf25fdac5c1a834a49e6f4c0c35c02f2a0671aaad389 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, CEPH_REF=squid, org.label-schema.schema-version=1.0) 2026-03-09T20:55:32.937 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 09 20:55:32 vm05.local bash[171101]: ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-3 2026-03-09T20:55:33.226 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 09 20:55:32 vm05.local podman[171176]: 2026-03-09 20:55:32.937217291 +0000 UTC m=+0.017908260 container create 2de38bbb05ce62df7c5a94c9d47c9c89d573e6f50b37a3b9d3a0181b459d53c4 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-3-deactivate, CEPH_REF=squid, 
GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-09T20:55:33.226 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 09 20:55:32 vm05.local podman[171176]: 2026-03-09 20:55:32.980373138 +0000 UTC m=+0.061064107 container init 2de38bbb05ce62df7c5a94c9d47c9c89d573e6f50b37a3b9d3a0181b459d53c4 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-3-deactivate, ceph=True, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-09T20:55:33.226 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 09 20:55:32 vm05.local podman[171176]: 2026-03-09 20:55:32.985498582 +0000 UTC m=+0.066189551 container start 2de38bbb05ce62df7c5a94c9d47c9c89d573e6f50b37a3b9d3a0181b459d53c4 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-3-deactivate, ceph=True, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-09T20:55:33.226 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 09 20:55:32 vm05.local podman[171176]: 2026-03-09 20:55:32.986810117 +0000 UTC m=+0.067501086 container attach 2de38bbb05ce62df7c5a94c9d47c9c89d573e6f50b37a3b9d3a0181b459d53c4 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-3-deactivate, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, 
GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3) 2026-03-09T20:55:33.226 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 09 20:55:33 vm05.local podman[171176]: 2026-03-09 20:55:32.930245952 +0000 UTC m=+0.010936932 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T20:55:33.226 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 09 20:55:33 vm05.local podman[171176]: 2026-03-09 20:55:33.112372759 +0000 UTC m=+0.193063728 container died 2de38bbb05ce62df7c5a94c9d47c9c89d573e6f50b37a3b9d3a0181b459d53c4 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-3-deactivate, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2) 2026-03-09T20:55:33.242 DEBUG:teuthology.orchestra.run.vm05:> sudo pkill -f 'journalctl -f -n 0 -u ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@osd.3.service' 2026-03-09T20:55:33.276 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T20:55:33.277 INFO:tasks.cephadm.osd.3:Stopped osd.3 2026-03-09T20:55:33.277 INFO:tasks.cephadm.osd.4:Stopping osd.4... 2026-03-09T20:55:33.277 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@osd.4 2026-03-09T20:55:33.773 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 20:55:33 vm09.local systemd[1]: Stopping Ceph osd.4 for c0151936-1bf4-11f1-b896-23f7bea8a6ea... 
2026-03-09T20:55:33.773 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 20:55:33 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-4[58888]: 2026-03-09T20:55:33.391+0000 7fb84aa83640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.4 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T20:55:33.773 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 20:55:33 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-4[58888]: 2026-03-09T20:55:33.391+0000 7fb84aa83640 -1 osd.4 787 *** Got signal Terminated *** 2026-03-09T20:55:33.773 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 20:55:33 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-4[58888]: 2026-03-09T20:55:33.391+0000 7fb84aa83640 -1 osd.4 787 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T20:55:36.273 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:35 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:55:35.868+0000 7f0708346640 -1 osd.7 787 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-09T20:55:11.427251+0000 front 2026-03-09T20:55:11.427160+0000 (oldest deadline 2026-03-09T20:55:35.526914+0000) 2026-03-09T20:55:36.273 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 20:55:36 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-6[69280]: 2026-03-09T20:55:36.091+0000 7f0c453c1640 -1 osd.6 787 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-09T20:55:11.909651+0000 front 2026-03-09T20:55:11.909741+0000 (oldest deadline 2026-03-09T20:55:36.009283+0000) 2026-03-09T20:55:37.273 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:36 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:55:36.869+0000 7f0708346640 -1 osd.7 787 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-09T20:55:11.427251+0000 front 2026-03-09T20:55:11.427160+0000 (oldest deadline 2026-03-09T20:55:35.526914+0000) 2026-03-09T20:55:37.273 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 20:55:37 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-6[69280]: 2026-03-09T20:55:37.063+0000 7f0c453c1640 -1 osd.6 787 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-09T20:55:11.909651+0000 front 2026-03-09T20:55:11.909741+0000 (oldest deadline 2026-03-09T20:55:36.009283+0000) 2026-03-09T20:55:37.907 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 20:55:37 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-5[64082]: 2026-03-09T20:55:37.650+0000 7fdaabd22640 -1 osd.5 787 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-09T20:55:12.105555+0000 front 2026-03-09T20:55:12.105683+0000 (oldest deadline 2026-03-09T20:55:37.405178+0000) 2026-03-09T20:55:38.273 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:37 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:55:37.906+0000 7f0708346640 -1 osd.7 787 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-09T20:55:11.427251+0000 front 2026-03-09T20:55:11.427160+0000 (oldest deadline 2026-03-09T20:55:35.526914+0000) 2026-03-09T20:55:38.273 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 20:55:38 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-6[69280]: 2026-03-09T20:55:38.016+0000 7f0c453c1640 -1 osd.6 787 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-09T20:55:11.909651+0000 front 
2026-03-09T20:55:11.909741+0000 (oldest deadline 2026-03-09T20:55:36.009283+0000) 2026-03-09T20:55:38.700 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 20:55:38 vm09.local podman[91259]: 2026-03-09 20:55:38.431457402 +0000 UTC m=+5.056378273 container died 74866824ee2c9ac529d0379a80eb953caba4070f8f6e9570e58a759dc25d95cf (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-4, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-09T20:55:38.700 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 20:55:38 vm09.local podman[91259]: 2026-03-09 20:55:38.566150774 +0000 UTC m=+5.191071645 container remove 74866824ee2c9ac529d0379a80eb953caba4070f8f6e9570e58a759dc25d95cf (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-4, org.label-schema.build-date=20260223, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, ceph=True) 2026-03-09T20:55:38.700 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 20:55:38 vm09.local bash[91259]: ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-4 2026-03-09T20:55:38.700 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 20:55:38 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-5[64082]: 2026-03-09T20:55:38.657+0000 7fdaabd22640 -1 osd.5 787 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-09T20:55:12.105555+0000 front 2026-03-09T20:55:12.105683+0000 (oldest deadline 2026-03-09T20:55:37.405178+0000) 2026-03-09T20:55:39.002 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 20:55:38 vm09.local podman[91335]: 2026-03-09 20:55:38.700853802 +0000 UTC m=+0.016455863 container create d8a77e4b4ce27e23bd0f890443adaf641c2152b143b4a15696e28e888f6e6517 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-4-deactivate, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 
Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-09T20:55:39.002 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 20:55:38 vm09.local podman[91335]: 2026-03-09 20:55:38.738123254 +0000 UTC m=+0.053725315 container init d8a77e4b4ce27e23bd0f890443adaf641c2152b143b4a15696e28e888f6e6517 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-4-deactivate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3) 2026-03-09T20:55:39.002 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 20:55:38 vm09.local podman[91335]: 2026-03-09 20:55:38.7425393 +0000 UTC m=+0.058141362 container start d8a77e4b4ce27e23bd0f890443adaf641c2152b143b4a15696e28e888f6e6517 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-4-deactivate, org.label-schema.build-date=20260223, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2) 2026-03-09T20:55:39.002 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 20:55:38 vm09.local podman[91335]: 2026-03-09 20:55:38.743972624 +0000 UTC m=+0.059574685 container attach d8a77e4b4ce27e23bd0f890443adaf641c2152b143b4a15696e28e888f6e6517 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-4-deactivate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-09T20:55:39.002 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 20:55:38 vm09.local podman[91335]: 2026-03-09 20:55:38.694659748 +0000 UTC m=+0.010261820 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c 
quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T20:55:39.002 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 20:55:38 vm09.local podman[91375]: 2026-03-09 20:55:38.890167242 +0000 UTC m=+0.009881758 container died d8a77e4b4ce27e23bd0f890443adaf641c2152b143b4a15696e28e888f6e6517 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-4-deactivate, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223) 2026-03-09T20:55:39.002 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:38 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:55:38.865+0000 7f0708346640 -1 osd.7 787 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-09T20:55:11.427251+0000 front 2026-03-09T20:55:11.427160+0000 (oldest deadline 2026-03-09T20:55:35.526914+0000) 2026-03-09T20:55:39.020 DEBUG:teuthology.orchestra.run.vm09:> sudo pkill -f 'journalctl -f -n 0 -u ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@osd.4.service' 2026-03-09T20:55:39.051 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T20:55:39.051 INFO:tasks.cephadm.osd.4:Stopped osd.4 2026-03-09T20:55:39.051 INFO:tasks.cephadm.osd.5:Stopping osd.5... 2026-03-09T20:55:39.052 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@osd.5 2026-03-09T20:55:39.273 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 20:55:39 vm09.local systemd[1]: Stopping Ceph osd.5 for c0151936-1bf4-11f1-b896-23f7bea8a6ea... 
2026-03-09T20:55:39.273 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 20:55:39 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-5[64082]: 2026-03-09T20:55:39.196+0000 7fdaaff0a640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.5 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T20:55:39.273 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 20:55:39 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-5[64082]: 2026-03-09T20:55:39.196+0000 7fdaaff0a640 -1 osd.5 787 *** Got signal Terminated *** 2026-03-09T20:55:39.273 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 20:55:39 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-5[64082]: 2026-03-09T20:55:39.196+0000 7fdaaff0a640 -1 osd.5 787 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T20:55:39.273 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 20:55:39 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-6[69280]: 2026-03-09T20:55:39.009+0000 7f0c453c1640 -1 osd.6 787 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-09T20:55:11.909651+0000 front 2026-03-09T20:55:11.909741+0000 (oldest deadline 2026-03-09T20:55:36.009283+0000) 2026-03-09T20:55:39.985 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:39 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:55:39.816+0000 7f0708346640 -1 osd.7 787 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-09T20:55:11.427251+0000 front 2026-03-09T20:55:11.427160+0000 (oldest deadline 2026-03-09T20:55:35.526914+0000) 2026-03-09T20:55:39.985 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 20:55:39 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-5[64082]: 2026-03-09T20:55:39.648+0000 7fdaabd22640 -1 osd.5 787 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-09T20:55:12.105555+0000 front 2026-03-09T20:55:12.105683+0000 (oldest deadline 2026-03-09T20:55:37.405178+0000) 2026-03-09T20:55:40.273 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 20:55:39 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-6[69280]: 2026-03-09T20:55:39.985+0000 7f0c453c1640 -1 osd.6 787 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-09T20:55:11.909651+0000 front 2026-03-09T20:55:11.909741+0000 (oldest deadline 2026-03-09T20:55:36.009283+0000) 2026-03-09T20:55:40.966 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 20:55:40 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-5[64082]: 2026-03-09T20:55:40.624+0000 7fdaabd22640 -1 osd.5 787 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-09T20:55:12.105555+0000 front 2026-03-09T20:55:12.105683+0000 (oldest deadline 2026-03-09T20:55:37.405178+0000) 2026-03-09T20:55:40.966 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:40 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:55:40.777+0000 7f0708346640 -1 osd.7 787 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-09T20:55:11.427251+0000 front 2026-03-09T20:55:11.427160+0000 (oldest deadline 2026-03-09T20:55:35.526914+0000) 2026-03-09T20:55:40.966 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 20:55:40 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-6[69280]: 2026-03-09T20:55:40.965+0000 7f0c453c1640 -1 osd.6 787 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-09T20:55:11.909651+0000 front 
2026-03-09T20:55:11.909741+0000 (oldest deadline 2026-03-09T20:55:36.009283+0000) 2026-03-09T20:55:41.976 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 20:55:41 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-5[64082]: 2026-03-09T20:55:41.622+0000 7fdaabd22640 -1 osd.5 787 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-09T20:55:12.105555+0000 front 2026-03-09T20:55:12.105683+0000 (oldest deadline 2026-03-09T20:55:37.405178+0000) 2026-03-09T20:55:41.976 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:41 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:55:41.728+0000 7f0708346640 -1 osd.7 787 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-09T20:55:11.427251+0000 front 2026-03-09T20:55:11.427160+0000 (oldest deadline 2026-03-09T20:55:35.526914+0000) 2026-03-09T20:55:42.273 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 20:55:41 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-6[69280]: 2026-03-09T20:55:41.975+0000 7f0c453c1640 -1 osd.6 787 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-09T20:55:11.909651+0000 front 2026-03-09T20:55:11.909741+0000 (oldest deadline 2026-03-09T20:55:36.009283+0000) 2026-03-09T20:55:42.941 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 20:55:42 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-5[64082]: 2026-03-09T20:55:42.594+0000 7fdaabd22640 -1 osd.5 787 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-09T20:55:12.105555+0000 front 2026-03-09T20:55:12.105683+0000 (oldest deadline 2026-03-09T20:55:37.405178+0000) 2026-03-09T20:55:42.942 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:42 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:55:42.723+0000 7f0708346640 -1 osd.7 787 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-09T20:55:11.427251+0000 front 2026-03-09T20:55:11.427160+0000 (oldest deadline 2026-03-09T20:55:35.526914+0000) 2026-03-09T20:55:43.273 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 20:55:42 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-6[69280]: 2026-03-09T20:55:42.941+0000 7f0c453c1640 -1 osd.6 787 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-09T20:55:11.909651+0000 front 2026-03-09T20:55:11.909741+0000 (oldest deadline 2026-03-09T20:55:36.009283+0000) 2026-03-09T20:55:43.906 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 20:55:43 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-5[64082]: 2026-03-09T20:55:43.622+0000 7fdaabd22640 -1 osd.5 787 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-09T20:55:12.105555+0000 front 2026-03-09T20:55:12.105683+0000 (oldest deadline 2026-03-09T20:55:37.405178+0000) 2026-03-09T20:55:43.906 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 20:55:43 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-5[64082]: 2026-03-09T20:55:43.622+0000 7fdaabd22640 -1 osd.5 787 heartbeat_check: no reply from 192.168.123.105:6807 osd.1 since back 2026-03-09T20:55:17.405860+0000 front 2026-03-09T20:55:17.405638+0000 (oldest deadline 2026-03-09T20:55:43.305437+0000) 2026-03-09T20:55:43.906 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:43 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:55:43.746+0000 7f0708346640 -1 osd.7 787 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-09T20:55:11.427251+0000 front 2026-03-09T20:55:11.427160+0000 (oldest 
deadline 2026-03-09T20:55:35.526914+0000) 2026-03-09T20:55:44.236 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 20:55:43 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-6[69280]: 2026-03-09T20:55:43.905+0000 7f0c453c1640 -1 osd.6 787 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-09T20:55:11.909651+0000 front 2026-03-09T20:55:11.909741+0000 (oldest deadline 2026-03-09T20:55:36.009283+0000) 2026-03-09T20:55:44.498 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 20:55:44 vm09.local podman[91460]: 2026-03-09 20:55:44.235419094 +0000 UTC m=+5.056702370 container died ee2ecd66e88f75ce18875fb5f9c3e598c17beb82a8e4c07e7973f2f31b785210 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-5, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, ceph=True) 2026-03-09T20:55:44.498 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 20:55:44 vm09.local podman[91460]: 2026-03-09 20:55:44.364190822 +0000 UTC m=+5.185474098 container remove ee2ecd66e88f75ce18875fb5f9c3e598c17beb82a8e4c07e7973f2f31b785210 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default) 2026-03-09T20:55:44.498 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 20:55:44 vm09.local bash[91460]: ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-5 2026-03-09T20:55:44.764 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 20:55:44 vm09.local podman[91536]: 2026-03-09 20:55:44.497951717 +0000 UTC m=+0.016729326 container create 39621254a7f233affabc0f87cd7e27dcac0bcc428b6ca362ad114d626ce025b9 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-5-deactivate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, 
org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, OSD_FLAVOR=default, io.buildah.version=1.41.3) 2026-03-09T20:55:44.764 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 20:55:44 vm09.local podman[91536]: 2026-03-09 20:55:44.539653885 +0000 UTC m=+0.058431495 container init 39621254a7f233affabc0f87cd7e27dcac0bcc428b6ca362ad114d626ce025b9 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-5-deactivate, org.label-schema.build-date=20260223, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=squid) 2026-03-09T20:55:44.764 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 20:55:44 vm09.local podman[91536]: 2026-03-09 20:55:44.544396152 +0000 UTC m=+0.063173761 container start 39621254a7f233affabc0f87cd7e27dcac0bcc428b6ca362ad114d626ce025b9 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-5-deactivate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-09T20:55:44.765 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 20:55:44 vm09.local podman[91536]: 2026-03-09 20:55:44.545279085 +0000 UTC m=+0.064056684 container attach 39621254a7f233affabc0f87cd7e27dcac0bcc428b6ca362ad114d626ce025b9 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-5-deactivate, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-09T20:55:44.765 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 20:55:44 vm09.local podman[91536]: 2026-03-09 20:55:44.491039799 +0000 UTC m=+0.009817418 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c 
quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T20:55:44.765 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 20:55:44 vm09.local podman[91536]: 2026-03-09 20:55:44.667535256 +0000 UTC m=+0.186312865 container died 39621254a7f233affabc0f87cd7e27dcac0bcc428b6ca362ad114d626ce025b9 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-5-deactivate, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223) 2026-03-09T20:55:44.803 DEBUG:teuthology.orchestra.run.vm09:> sudo pkill -f 'journalctl -f -n 0 -u ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@osd.5.service' 2026-03-09T20:55:44.839 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T20:55:44.839 INFO:tasks.cephadm.osd.5:Stopped osd.5 2026-03-09T20:55:44.839 INFO:tasks.cephadm.osd.6:Stopping osd.6... 2026-03-09T20:55:44.839 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@osd.6 2026-03-09T20:55:45.023 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:44 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:55:44.762+0000 7f0708346640 -1 osd.7 787 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-09T20:55:11.427251+0000 front 2026-03-09T20:55:11.427160+0000 (oldest deadline 2026-03-09T20:55:35.526914+0000) 2026-03-09T20:55:45.023 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 20:55:44 vm09.local systemd[1]: Stopping Ceph osd.6 for c0151936-1bf4-11f1-b896-23f7bea8a6ea... 
2026-03-09T20:55:45.023 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 20:55:44 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-6[69280]: 2026-03-09T20:55:44.954+0000 7f0c453c1640 -1 osd.6 787 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-09T20:55:11.909651+0000 front 2026-03-09T20:55:11.909741+0000 (oldest deadline 2026-03-09T20:55:36.009283+0000) 2026-03-09T20:55:45.023 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 20:55:44 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-6[69280]: 2026-03-09T20:55:44.954+0000 7f0c453c1640 -1 osd.6 787 heartbeat_check: no reply from 192.168.123.105:6807 osd.1 since back 2026-03-09T20:55:20.610435+0000 front 2026-03-09T20:55:20.610439+0000 (oldest deadline 2026-03-09T20:55:44.710182+0000) 2026-03-09T20:55:45.023 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 20:55:44 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-6[69280]: 2026-03-09T20:55:44.978+0000 7f0c48da8640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.6 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T20:55:45.023 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 20:55:44 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-6[69280]: 2026-03-09T20:55:44.978+0000 7f0c48da8640 -1 osd.6 787 *** Got signal Terminated *** 2026-03-09T20:55:45.023 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 20:55:44 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-6[69280]: 2026-03-09T20:55:44.978+0000 7f0c48da8640 -1 osd.6 787 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T20:55:46.273 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:45 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:55:45.783+0000 7f0708346640 -1 osd.7 787 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-09T20:55:11.427251+0000 front 2026-03-09T20:55:11.427160+0000 (oldest deadline 2026-03-09T20:55:35.526914+0000) 2026-03-09T20:55:46.273 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 20:55:45 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-6[69280]: 2026-03-09T20:55:45.952+0000 7f0c453c1640 -1 osd.6 787 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-09T20:55:11.909651+0000 front 2026-03-09T20:55:11.909741+0000 (oldest deadline 2026-03-09T20:55:36.009283+0000) 2026-03-09T20:55:46.273 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 20:55:45 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-6[69280]: 2026-03-09T20:55:45.952+0000 7f0c453c1640 -1 osd.6 787 heartbeat_check: no reply from 192.168.123.105:6807 osd.1 since back 2026-03-09T20:55:20.610435+0000 front 2026-03-09T20:55:20.610439+0000 (oldest deadline 2026-03-09T20:55:44.710182+0000) 2026-03-09T20:55:47.273 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:46 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:55:46.792+0000 7f0708346640 -1 osd.7 787 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-09T20:55:11.427251+0000 front 2026-03-09T20:55:11.427160+0000 (oldest deadline 2026-03-09T20:55:35.526914+0000) 2026-03-09T20:55:47.273 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:46 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:55:46.792+0000 7f0708346640 -1 osd.7 787 heartbeat_check: no reply from 192.168.123.105:6807 osd.1 since back 2026-03-09T20:55:20.728077+0000 front 
2026-03-09T20:55:20.728028+0000 (oldest deadline 2026-03-09T20:55:46.027699+0000) 2026-03-09T20:55:47.273 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 20:55:46 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-6[69280]: 2026-03-09T20:55:46.988+0000 7f0c453c1640 -1 osd.6 787 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-09T20:55:11.909651+0000 front 2026-03-09T20:55:11.909741+0000 (oldest deadline 2026-03-09T20:55:36.009283+0000) 2026-03-09T20:55:47.273 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 20:55:46 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-6[69280]: 2026-03-09T20:55:46.988+0000 7f0c453c1640 -1 osd.6 787 heartbeat_check: no reply from 192.168.123.105:6807 osd.1 since back 2026-03-09T20:55:20.610435+0000 front 2026-03-09T20:55:20.610439+0000 (oldest deadline 2026-03-09T20:55:44.710182+0000) 2026-03-09T20:55:48.273 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:47 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:55:47.795+0000 7f0708346640 -1 osd.7 787 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-09T20:55:11.427251+0000 front 2026-03-09T20:55:11.427160+0000 (oldest deadline 2026-03-09T20:55:35.526914+0000) 2026-03-09T20:55:48.273 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:47 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:55:47.795+0000 7f0708346640 -1 osd.7 787 heartbeat_check: no reply from 192.168.123.105:6807 osd.1 since back 2026-03-09T20:55:20.728077+0000 front 2026-03-09T20:55:20.728028+0000 (oldest deadline 2026-03-09T20:55:46.027699+0000) 2026-03-09T20:55:48.273 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:47 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:55:47.795+0000 7f0708346640 -1 osd.7 787 heartbeat_check: no reply from 192.168.123.105:6811 osd.2 since back 2026-03-09T20:55:26.028241+0000 front 2026-03-09T20:55:26.028156+0000 (oldest deadline 2026-03-09T20:55:47.727949+0000) 2026-03-09T20:55:48.273 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 20:55:48 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-6[69280]: 2026-03-09T20:55:48.012+0000 7f0c453c1640 -1 osd.6 787 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-09T20:55:11.909651+0000 front 2026-03-09T20:55:11.909741+0000 (oldest deadline 2026-03-09T20:55:36.009283+0000) 2026-03-09T20:55:48.273 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 20:55:48 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-6[69280]: 2026-03-09T20:55:48.012+0000 7f0c453c1640 -1 osd.6 787 heartbeat_check: no reply from 192.168.123.105:6807 osd.1 since back 2026-03-09T20:55:20.610435+0000 front 2026-03-09T20:55:20.610439+0000 (oldest deadline 2026-03-09T20:55:44.710182+0000) 2026-03-09T20:55:49.023 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:48 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:55:48.754+0000 7f0708346640 -1 osd.7 787 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-09T20:55:11.427251+0000 front 2026-03-09T20:55:11.427160+0000 (oldest deadline 2026-03-09T20:55:35.526914+0000) 2026-03-09T20:55:49.023 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:48 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:55:48.754+0000 7f0708346640 -1 osd.7 787 heartbeat_check: no reply from 192.168.123.105:6807 osd.1 since back 2026-03-09T20:55:20.728077+0000 front 2026-03-09T20:55:20.728028+0000 (oldest 
deadline 2026-03-09T20:55:46.027699+0000) 2026-03-09T20:55:49.023 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:48 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:55:48.754+0000 7f0708346640 -1 osd.7 787 heartbeat_check: no reply from 192.168.123.105:6811 osd.2 since back 2026-03-09T20:55:26.028241+0000 front 2026-03-09T20:55:26.028156+0000 (oldest deadline 2026-03-09T20:55:47.727949+0000) 2026-03-09T20:55:49.523 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 20:55:49 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-6[69280]: 2026-03-09T20:55:49.042+0000 7f0c453c1640 -1 osd.6 787 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-09T20:55:11.909651+0000 front 2026-03-09T20:55:11.909741+0000 (oldest deadline 2026-03-09T20:55:36.009283+0000) 2026-03-09T20:55:49.523 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 20:55:49 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-6[69280]: 2026-03-09T20:55:49.042+0000 7f0c453c1640 -1 osd.6 787 heartbeat_check: no reply from 192.168.123.105:6807 osd.1 since back 2026-03-09T20:55:20.610435+0000 front 2026-03-09T20:55:20.610439+0000 (oldest deadline 2026-03-09T20:55:44.710182+0000) 2026-03-09T20:55:50.022 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:49 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:55:49.758+0000 7f0708346640 -1 osd.7 787 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-09T20:55:11.427251+0000 front 2026-03-09T20:55:11.427160+0000 (oldest deadline 2026-03-09T20:55:35.526914+0000) 2026-03-09T20:55:50.022 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:49 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:55:49.758+0000 7f0708346640 -1 osd.7 787 heartbeat_check: no reply from 192.168.123.105:6807 osd.1 since back 2026-03-09T20:55:20.728077+0000 front 2026-03-09T20:55:20.728028+0000 (oldest deadline 2026-03-09T20:55:46.027699+0000) 2026-03-09T20:55:50.022 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:49 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:55:49.758+0000 7f0708346640 -1 osd.7 787 heartbeat_check: no reply from 192.168.123.105:6811 osd.2 since back 2026-03-09T20:55:26.028241+0000 front 2026-03-09T20:55:26.028156+0000 (oldest deadline 2026-03-09T20:55:47.727949+0000) 2026-03-09T20:55:50.274 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 20:55:50 vm09.local podman[91656]: 2026-03-09 20:55:50.021890286 +0000 UTC m=+5.057528466 container died f22b9b1e9a2d0587c43a05277cb20673c74498f9e08045eae201161cb2f8f263 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-6, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-09T20:55:50.274 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 20:55:50 vm09.local podman[91656]: 2026-03-09 20:55:50.151535429 
+0000 UTC m=+5.187173609 container remove f22b9b1e9a2d0587c43a05277cb20673c74498f9e08045eae201161cb2f8f263 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-6, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-09T20:55:50.274 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 20:55:50 vm09.local bash[91656]: ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-6 2026-03-09T20:55:50.636 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 20:55:50 vm09.local podman[91743]: 2026-03-09 20:55:50.335013956 +0000 UTC m=+0.020604249 container create 5142092a91e4b59b9e2dde42127b1bb0779dcbfd8ade12f942834ecfe6475d80 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-6-deactivate, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223) 2026-03-09T20:55:50.636 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 20:55:50 vm09.local podman[91743]: 2026-03-09 20:55:50.381096895 +0000 UTC m=+0.066687188 container init 5142092a91e4b59b9e2dde42127b1bb0779dcbfd8ade12f942834ecfe6475d80 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-6-deactivate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-09T20:55:50.636 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 20:55:50 vm09.local podman[91743]: 2026-03-09 20:55:50.385316133 +0000 UTC m=+0.070906416 container start 5142092a91e4b59b9e2dde42127b1bb0779dcbfd8ade12f942834ecfe6475d80 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-6-deactivate, 
ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, OSD_FLAVOR=default) 2026-03-09T20:55:50.636 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 20:55:50 vm09.local podman[91743]: 2026-03-09 20:55:50.386290026 +0000 UTC m=+0.071880319 container attach 5142092a91e4b59b9e2dde42127b1bb0779dcbfd8ade12f942834ecfe6475d80 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-6-deactivate, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3) 2026-03-09T20:55:50.636 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 20:55:50 vm09.local podman[91743]: 2026-03-09 20:55:50.325297378 +0000 UTC m=+0.010887681 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T20:55:50.636 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 09 20:55:50 vm09.local podman[91743]: 2026-03-09 20:55:50.512487685 +0000 UTC m=+0.198077978 container died 5142092a91e4b59b9e2dde42127b1bb0779dcbfd8ade12f942834ecfe6475d80 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-6-deactivate, org.label-schema.build-date=20260223, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0) 2026-03-09T20:55:50.656 DEBUG:teuthology.orchestra.run.vm09:> sudo pkill -f 'journalctl -f -n 0 -u ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@osd.6.service' 2026-03-09T20:55:50.696 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T20:55:50.696 INFO:tasks.cephadm.osd.6:Stopped osd.6 2026-03-09T20:55:50.696 INFO:tasks.cephadm.osd.7:Stopping osd.7... 
2026-03-09T20:55:50.696 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@osd.7 2026-03-09T20:55:51.023 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:50 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:55:50.736+0000 7f0708346640 -1 osd.7 787 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-09T20:55:11.427251+0000 front 2026-03-09T20:55:11.427160+0000 (oldest deadline 2026-03-09T20:55:35.526914+0000) 2026-03-09T20:55:51.023 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:50 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:55:50.736+0000 7f0708346640 -1 osd.7 787 heartbeat_check: no reply from 192.168.123.105:6807 osd.1 since back 2026-03-09T20:55:20.728077+0000 front 2026-03-09T20:55:20.728028+0000 (oldest deadline 2026-03-09T20:55:46.027699+0000) 2026-03-09T20:55:51.023 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:50 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:55:50.736+0000 7f0708346640 -1 osd.7 787 heartbeat_check: no reply from 192.168.123.105:6811 osd.2 since back 2026-03-09T20:55:26.028241+0000 front 2026-03-09T20:55:26.028156+0000 (oldest deadline 2026-03-09T20:55:47.727949+0000) 2026-03-09T20:55:51.023 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:50 vm09.local systemd[1]: Stopping Ceph osd.7 for c0151936-1bf4-11f1-b896-23f7bea8a6ea... 2026-03-09T20:55:51.023 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:50 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:55:50.854+0000 7f070c52e640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.7 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T20:55:51.023 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:50 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:55:50.854+0000 7f070c52e640 -1 osd.7 787 *** Got signal Terminated *** 2026-03-09T20:55:51.023 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:50 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:55:50.854+0000 7f070c52e640 -1 osd.7 787 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T20:55:52.023 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:51 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:55:51.741+0000 7f0708346640 -1 osd.7 787 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-09T20:55:11.427251+0000 front 2026-03-09T20:55:11.427160+0000 (oldest deadline 2026-03-09T20:55:35.526914+0000) 2026-03-09T20:55:52.023 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:51 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:55:51.741+0000 7f0708346640 -1 osd.7 787 heartbeat_check: no reply from 192.168.123.105:6807 osd.1 since back 2026-03-09T20:55:20.728077+0000 front 2026-03-09T20:55:20.728028+0000 (oldest deadline 2026-03-09T20:55:46.027699+0000) 2026-03-09T20:55:52.023 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:51 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:55:51.741+0000 7f0708346640 -1 osd.7 787 heartbeat_check: no reply from 192.168.123.105:6811 osd.2 since back 2026-03-09T20:55:26.028241+0000 front 2026-03-09T20:55:26.028156+0000 (oldest deadline 2026-03-09T20:55:47.727949+0000) 2026-03-09T20:55:53.023 
INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:52 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:55:52.703+0000 7f0708346640 -1 osd.7 787 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-09T20:55:11.427251+0000 front 2026-03-09T20:55:11.427160+0000 (oldest deadline 2026-03-09T20:55:35.526914+0000) 2026-03-09T20:55:53.023 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:52 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:55:52.703+0000 7f0708346640 -1 osd.7 787 heartbeat_check: no reply from 192.168.123.105:6807 osd.1 since back 2026-03-09T20:55:20.728077+0000 front 2026-03-09T20:55:20.728028+0000 (oldest deadline 2026-03-09T20:55:46.027699+0000) 2026-03-09T20:55:53.023 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:52 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:55:52.703+0000 7f0708346640 -1 osd.7 787 heartbeat_check: no reply from 192.168.123.105:6811 osd.2 since back 2026-03-09T20:55:26.028241+0000 front 2026-03-09T20:55:26.028156+0000 (oldest deadline 2026-03-09T20:55:47.727949+0000) 2026-03-09T20:55:54.023 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:53 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:55:53.753+0000 7f0708346640 -1 osd.7 787 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-09T20:55:11.427251+0000 front 2026-03-09T20:55:11.427160+0000 (oldest deadline 2026-03-09T20:55:35.526914+0000) 2026-03-09T20:55:54.023 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:53 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:55:53.753+0000 7f0708346640 -1 osd.7 787 heartbeat_check: no reply from 192.168.123.105:6807 osd.1 since back 2026-03-09T20:55:20.728077+0000 front 2026-03-09T20:55:20.728028+0000 (oldest deadline 2026-03-09T20:55:46.027699+0000) 2026-03-09T20:55:54.023 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:53 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:55:53.753+0000 7f0708346640 -1 osd.7 787 heartbeat_check: no reply from 192.168.123.105:6811 osd.2 since back 2026-03-09T20:55:26.028241+0000 front 2026-03-09T20:55:26.028156+0000 (oldest deadline 2026-03-09T20:55:47.727949+0000) 2026-03-09T20:55:54.023 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:53 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:55:53.753+0000 7f0708346640 -1 osd.7 787 heartbeat_check: no reply from 192.168.123.105:6815 osd.3 since back 2026-03-09T20:55:28.228848+0000 front 2026-03-09T20:55:28.228718+0000 (oldest deadline 2026-03-09T20:55:53.528560+0000) 2026-03-09T20:55:55.023 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:54 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:55:54.735+0000 7f0708346640 -1 osd.7 787 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-09T20:55:11.427251+0000 front 2026-03-09T20:55:11.427160+0000 (oldest deadline 2026-03-09T20:55:35.526914+0000) 2026-03-09T20:55:55.023 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:54 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:55:54.735+0000 7f0708346640 -1 osd.7 787 heartbeat_check: no reply from 192.168.123.105:6807 osd.1 since back 2026-03-09T20:55:20.728077+0000 front 2026-03-09T20:55:20.728028+0000 (oldest deadline 2026-03-09T20:55:46.027699+0000) 2026-03-09T20:55:55.023 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 
09 20:55:54 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:55:54.735+0000 7f0708346640 -1 osd.7 787 heartbeat_check: no reply from 192.168.123.105:6811 osd.2 since back 2026-03-09T20:55:26.028241+0000 front 2026-03-09T20:55:26.028156+0000 (oldest deadline 2026-03-09T20:55:47.727949+0000) 2026-03-09T20:55:55.023 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:54 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:55:54.735+0000 7f0708346640 -1 osd.7 787 heartbeat_check: no reply from 192.168.123.105:6815 osd.3 since back 2026-03-09T20:55:28.228848+0000 front 2026-03-09T20:55:28.228718+0000 (oldest deadline 2026-03-09T20:55:53.528560+0000) 2026-03-09T20:55:56.031 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:55 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:55:55.775+0000 7f0708346640 -1 osd.7 787 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-09T20:55:11.427251+0000 front 2026-03-09T20:55:11.427160+0000 (oldest deadline 2026-03-09T20:55:35.526914+0000) 2026-03-09T20:55:56.031 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:55 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:55:55.775+0000 7f0708346640 -1 osd.7 787 heartbeat_check: no reply from 192.168.123.105:6807 osd.1 since back 2026-03-09T20:55:20.728077+0000 front 2026-03-09T20:55:20.728028+0000 (oldest deadline 2026-03-09T20:55:46.027699+0000) 2026-03-09T20:55:56.031 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:55 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:55:55.775+0000 7f0708346640 -1 osd.7 787 heartbeat_check: no reply from 192.168.123.105:6811 osd.2 since back 2026-03-09T20:55:26.028241+0000 front 2026-03-09T20:55:26.028156+0000 (oldest deadline 2026-03-09T20:55:47.727949+0000) 2026-03-09T20:55:56.031 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:55 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7[74514]: 2026-03-09T20:55:55.775+0000 7f0708346640 -1 osd.7 787 heartbeat_check: no reply from 192.168.123.105:6815 osd.3 since back 2026-03-09T20:55:28.228848+0000 front 2026-03-09T20:55:28.228718+0000 (oldest deadline 2026-03-09T20:55:53.528560+0000) 2026-03-09T20:55:56.031 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:55 vm09.local podman[91866]: 2026-03-09 20:55:55.897367171 +0000 UTC m=+5.061165845 container died 4ec218b189ed4d9d2471ada2669ab00d14501640e5605fa681b24d17da833b16 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-09T20:55:56.338 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:56 vm09.local podman[91866]: 2026-03-09 20:55:56.031494352 +0000 UTC m=+5.195293026 container remove 4ec218b189ed4d9d2471ada2669ab00d14501640e5605fa681b24d17da833b16 
(image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-09T20:55:56.338 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:56 vm09.local bash[91866]: ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7 2026-03-09T20:55:56.338 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:56 vm09.local podman[91943]: 2026-03-09 20:55:56.163611491 +0000 UTC m=+0.015691452 container create 0872f9139c075416de3216a51373413ac7c35aab607ac1c8008390c273be1315 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7-deactivate, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-09T20:55:56.338 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:56 vm09.local podman[91943]: 2026-03-09 20:55:56.203359943 +0000 UTC m=+0.055439914 container init 0872f9139c075416de3216a51373413ac7c35aab607ac1c8008390c273be1315 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7-deactivate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-09T20:55:56.338 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:56 vm09.local podman[91943]: 2026-03-09 20:55:56.208623786 +0000 UTC m=+0.060703747 container start 0872f9139c075416de3216a51373413ac7c35aab607ac1c8008390c273be1315 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7-deactivate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base 
Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) 2026-03-09T20:55:56.338 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:56 vm09.local podman[91943]: 2026-03-09 20:55:56.215540853 +0000 UTC m=+0.067620824 container attach 0872f9139c075416de3216a51373413ac7c35aab607ac1c8008390c273be1315 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-osd-7-deactivate, org.label-schema.build-date=20260223, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-09T20:55:56.338 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:56 vm09.local podman[91943]: 2026-03-09 20:55:56.157343038 +0000 UTC m=+0.009423008 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T20:55:56.338 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 09 20:55:56 vm09.local conmon[91954]: conmon 0872f9139c075416de32 : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0872f9139c075416de3216a51373413ac7c35aab607ac1c8008390c273be1315.scope/memory.events 2026-03-09T20:55:56.475 DEBUG:teuthology.orchestra.run.vm09:> sudo pkill -f 'journalctl -f -n 0 -u ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@osd.7.service' 2026-03-09T20:55:56.511 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T20:55:56.511 INFO:tasks.cephadm.osd.7:Stopped osd.7 2026-03-09T20:55:56.511 INFO:tasks.cephadm.ceph.rgw.foo.a:Stopping rgw.foo.a... 2026-03-09T20:55:56.511 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@rgw.foo.a 2026-03-09T20:55:56.910 INFO:journalctl@ceph.rgw.foo.a.vm05.stdout:Mar 09 20:55:56 vm05.local systemd[1]: Stopping Ceph rgw.foo.a for c0151936-1bf4-11f1-b896-23f7bea8a6ea... 
2026-03-09T20:55:56.910 INFO:journalctl@ceph.rgw.foo.a.vm05.stdout:Mar 09 20:55:56 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-rgw-foo-a[86167]: 2026-03-09T20:55:56.616+0000 7f8e65c26640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/radosgw -n client.rgw.foo.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T20:55:56.910 INFO:journalctl@ceph.rgw.foo.a.vm05.stdout:Mar 09 20:55:56 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-rgw-foo-a[86167]: 2026-03-09T20:55:56.616+0000 7f8e69495980 -1 shutting down 2026-03-09T20:56:06.828 DEBUG:teuthology.orchestra.run.vm05:> sudo pkill -f 'journalctl -f -n 0 -u ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@rgw.foo.a.service' 2026-03-09T20:56:06.860 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T20:56:06.860 INFO:tasks.cephadm.ceph.rgw.foo.a:Stopped rgw.foo.a 2026-03-09T20:56:06.860 INFO:tasks.cephadm.prometheus.a:Stopping prometheus.a... 2026-03-09T20:56:06.860 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@prometheus.a 2026-03-09T20:56:07.146 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:56:06 vm09.local systemd[1]: Stopping Ceph prometheus.a for c0151936-1bf4-11f1-b896-23f7bea8a6ea... 2026-03-09T20:56:07.146 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:56:06 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[82716]: ts=2026-03-09T20:56:06.965Z caller=main.go:964 level=warn msg="Received SIGTERM, exiting gracefully..." 2026-03-09T20:56:07.146 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:56:06 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[82716]: ts=2026-03-09T20:56:06.965Z caller=main.go:988 level=info msg="Stopping scrape discovery manager..." 2026-03-09T20:56:07.146 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:56:06 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[82716]: ts=2026-03-09T20:56:06.965Z caller=main.go:1002 level=info msg="Stopping notify discovery manager..." 2026-03-09T20:56:07.146 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:56:06 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[82716]: ts=2026-03-09T20:56:06.965Z caller=manager.go:177 level=info component="rule manager" msg="Stopping rule manager..." 2026-03-09T20:56:07.146 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:56:06 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[82716]: ts=2026-03-09T20:56:06.965Z caller=manager.go:187 level=info component="rule manager" msg="Rule manager stopped" 2026-03-09T20:56:07.146 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:56:06 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[82716]: ts=2026-03-09T20:56:06.965Z caller=main.go:1039 level=info msg="Stopping scrape manager..." 
2026-03-09T20:56:07.146 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:56:06 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[82716]: ts=2026-03-09T20:56:06.965Z caller=main.go:984 level=info msg="Scrape discovery manager stopped" 2026-03-09T20:56:07.146 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:56:06 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[82716]: ts=2026-03-09T20:56:06.965Z caller=main.go:998 level=info msg="Notify discovery manager stopped" 2026-03-09T20:56:07.146 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:56:06 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[82716]: ts=2026-03-09T20:56:06.966Z caller=main.go:1031 level=info msg="Scrape manager stopped" 2026-03-09T20:56:07.146 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:56:06 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[82716]: ts=2026-03-09T20:56:06.968Z caller=notifier.go:618 level=info component=notifier msg="Stopping notification manager..." 2026-03-09T20:56:07.146 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:56:06 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[82716]: ts=2026-03-09T20:56:06.968Z caller=main.go:1261 level=info msg="Notifier manager stopped" 2026-03-09T20:56:07.146 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:56:06 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a[82716]: ts=2026-03-09T20:56:06.969Z caller=main.go:1273 level=info msg="See you next time!" 2026-03-09T20:56:07.146 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:56:06 vm09.local podman[92064]: 2026-03-09 20:56:06.979182015 +0000 UTC m=+0.031345785 container died e765eb08e41565fe4fdfd1cf466c36aa2523847ead61f7564f78c307a223e230 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a, maintainer=The Prometheus Authors ) 2026-03-09T20:56:07.146 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:56:07 vm09.local podman[92064]: 2026-03-09 20:56:07.097798724 +0000 UTC m=+0.149962494 container remove e765eb08e41565fe4fdfd1cf466c36aa2523847ead61f7564f78c307a223e230 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a, maintainer=The Prometheus Authors ) 2026-03-09T20:56:07.146 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 09 20:56:07 vm09.local bash[92064]: ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-prometheus-a 2026-03-09T20:56:07.154 DEBUG:teuthology.orchestra.run.vm09:> sudo pkill -f 'journalctl -f -n 0 -u ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@prometheus.a.service' 2026-03-09T20:56:07.184 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T20:56:07.184 INFO:tasks.cephadm.prometheus.a:Stopped prometheus.a 2026-03-09T20:56:07.184 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm rm-cluster --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea --force --keep-logs 2026-03-09T20:56:07.311 INFO:teuthology.orchestra.run.vm05.stdout:Deleting cluster with fsid: c0151936-1bf4-11f1-b896-23f7bea8a6ea 2026-03-09T20:56:08.910 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:56:08 vm05.local systemd[1]: Stopping Ceph alertmanager.a for c0151936-1bf4-11f1-b896-23f7bea8a6ea... 2026-03-09T20:56:08.910 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:56:08 vm05.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-alertmanager-a[93634]: ts=2026-03-09T20:56:08.850Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..." 
2026-03-09T20:56:08.910 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:56:08 vm05.local podman[171831]: 2026-03-09 20:56:08.862798969 +0000 UTC m=+0.028174637 container died b433c0522983d3e565dd97caa875523fd403604be74f9407889fe705a6d8329e (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-alertmanager-a, maintainer=The Prometheus Authors ) 2026-03-09T20:56:09.279 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:56:08 vm05.local podman[171831]: 2026-03-09 20:56:08.979302095 +0000 UTC m=+0.144677763 container remove b433c0522983d3e565dd97caa875523fd403604be74f9407889fe705a6d8329e (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-alertmanager-a, maintainer=The Prometheus Authors ) 2026-03-09T20:56:09.279 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:56:08 vm05.local podman[171831]: 2026-03-09 20:56:08.980373961 +0000 UTC m=+0.145749629 volume remove 23a0b629cfbd1075cc2c1a143810684c31d0d6b4448f91d71f719ad3ac5cb866 2026-03-09T20:56:09.279 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:56:08 vm05.local bash[171831]: ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-alertmanager-a 2026-03-09T20:56:09.279 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:56:09 vm05.local systemd[1]: ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@alertmanager.a.service: Deactivated successfully. 2026-03-09T20:56:09.279 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:56:09 vm05.local systemd[1]: Stopped Ceph alertmanager.a for c0151936-1bf4-11f1-b896-23f7bea8a6ea. 2026-03-09T20:56:09.280 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 09 20:56:09 vm05.local systemd[1]: ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@alertmanager.a.service: Consumed 1.614s CPU time. 2026-03-09T20:56:09.280 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:56:09 vm05.local systemd[1]: Stopping Ceph node-exporter.a for c0151936-1bf4-11f1-b896-23f7bea8a6ea... 2026-03-09T20:56:09.660 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:56:09 vm05.local podman[171942]: 2026-03-09 20:56:09.280228975 +0000 UTC m=+0.016440725 container died e166fd129dd7132b9170740eb2da3e544c3c884893a368de6b95bcd42f2c7263 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a, maintainer=The Prometheus Authors ) 2026-03-09T20:56:09.660 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:56:09 vm05.local podman[171942]: 2026-03-09 20:56:09.41330506 +0000 UTC m=+0.149516809 container remove e166fd129dd7132b9170740eb2da3e544c3c884893a368de6b95bcd42f2c7263 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a, maintainer=The Prometheus Authors ) 2026-03-09T20:56:09.660 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:56:09 vm05.local bash[171942]: ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-a 2026-03-09T20:56:09.660 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:56:09 vm05.local systemd[1]: ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@node-exporter.a.service: Main process exited, code=exited, status=143/n/a 2026-03-09T20:56:09.660 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:56:09 vm05.local systemd[1]: ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@node-exporter.a.service: Failed with result 'exit-code'. 
2026-03-09T20:56:09.660 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:56:09 vm05.local systemd[1]: Stopped Ceph node-exporter.a for c0151936-1bf4-11f1-b896-23f7bea8a6ea. 2026-03-09T20:56:09.660 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 09 20:56:09 vm05.local systemd[1]: ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@node-exporter.a.service: Consumed 2.354s CPU time. 2026-03-09T20:56:10.049 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm rm-cluster --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea --force --keep-logs 2026-03-09T20:56:10.173 INFO:teuthology.orchestra.run.vm09.stdout:Deleting cluster with fsid: c0151936-1bf4-11f1-b896-23f7bea8a6ea 2026-03-09T20:56:11.523 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:56:11 vm09.local systemd[1]: Stopping Ceph iscsi.iscsi.a for c0151936-1bf4-11f1-b896-23f7bea8a6ea... 2026-03-09T20:56:11.523 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:56:11 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a[79398]: debug Shutdown received 2026-03-09T20:56:21.685 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:56:21 vm09.local bash[92494]: time="2026-03-09T20:56:21Z" level=warning msg="StopSignal SIGTERM failed to stop container ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a in 10 seconds, resorting to SIGKILL" 2026-03-09T20:56:21.686 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:56:21 vm09.local podman[92494]: 2026-03-09 20:56:21.401750406 +0000 UTC m=+10.041401161 container died 32c4c55b149612244236d3e5df1d169ce0b22d0e0eb31fa6da24d37596176732 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-09T20:56:21.686 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:56:21 vm09.local podman[92494]: 2026-03-09 20:56:21.529570001 +0000 UTC m=+10.169220756 container remove 32c4c55b149612244236d3e5df1d169ce0b22d0e0eb31fa6da24d37596176732 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2) 2026-03-09T20:56:21.686 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:56:21 vm09.local bash[92494]: 
ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-iscsi-iscsi-a 2026-03-09T20:56:21.686 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:56:21 vm09.local systemd[1]: ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@iscsi.iscsi.a.service: Main process exited, code=exited, status=137/n/a 2026-03-09T20:56:21.686 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:56:21 vm09.local systemd[1]: ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@iscsi.iscsi.a.service: Failed with result 'exit-code'. 2026-03-09T20:56:21.686 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:56:21 vm09.local systemd[1]: Stopped Ceph iscsi.iscsi.a for c0151936-1bf4-11f1-b896-23f7bea8a6ea. 2026-03-09T20:56:21.686 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 09 20:56:21 vm09.local systemd[1]: ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@iscsi.iscsi.a.service: Consumed 2.441s CPU time. 2026-03-09T20:56:22.524 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:56:22 vm09.local systemd[1]: Stopping Ceph grafana.a for c0151936-1bf4-11f1-b896-23f7bea8a6ea... 2026-03-09T20:56:22.524 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:56:22 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=server t=2026-03-09T20:56:22.428686352Z level=info msg="Shutdown started" reason="System signal: terminated" 2026-03-09T20:56:22.524 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:56:22 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=grafana-apiserver t=2026-03-09T20:56:22.429114803Z level=info msg="StorageObjectCountTracker pruner is exiting" 2026-03-09T20:56:22.524 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:56:22 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=tracing t=2026-03-09T20:56:22.429735525Z level=info msg="Closing tracing" 2026-03-09T20:56:22.524 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:56:22 vm09.local ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a[81662]: logger=ticker t=2026-03-09T20:56:22.430020589Z level=info msg=stopped last_tick=2026-03-09T20:56:20Z 2026-03-09T20:56:22.524 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:56:22 vm09.local podman[92761]: 2026-03-09 20:56:22.441142989 +0000 UTC m=+0.027305301 container died 82826c9f558ac40b47b5aceec014cbf22c07fed9ea3f3e656a517738f2d5cb8a (image=quay.io/ceph/grafana:10.4.0, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a, maintainer=Grafana Labs ) 2026-03-09T20:56:22.820 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:56:22 vm09.local systemd[1]: Stopping Ceph node-exporter.b for c0151936-1bf4-11f1-b896-23f7bea8a6ea... 2026-03-09T20:56:22.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:56:22 vm09.local podman[92761]: 2026-03-09 20:56:22.573786322 +0000 UTC m=+0.159948645 container remove 82826c9f558ac40b47b5aceec014cbf22c07fed9ea3f3e656a517738f2d5cb8a (image=quay.io/ceph/grafana:10.4.0, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a, maintainer=Grafana Labs ) 2026-03-09T20:56:22.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:56:22 vm09.local bash[92761]: ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-grafana-a 2026-03-09T20:56:22.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:56:22 vm09.local systemd[1]: ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@grafana.a.service: Deactivated successfully. 2026-03-09T20:56:22.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:56:22 vm09.local systemd[1]: Stopped Ceph grafana.a for c0151936-1bf4-11f1-b896-23f7bea8a6ea. 
2026-03-09T20:56:22.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 09 20:56:22 vm09.local systemd[1]: ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@grafana.a.service: Consumed 11.245s CPU time. 2026-03-09T20:56:23.072 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:56:22 vm09.local podman[92869]: 2026-03-09 20:56:22.890424621 +0000 UTC m=+0.017973355 container died 52f5c2f42d47c9819e96a9ba283c101756e5943f967b4fca4d6bc53e61f281fa (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b, maintainer=The Prometheus Authors ) 2026-03-09T20:56:23.076 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:56:23 vm09.local podman[92869]: 2026-03-09 20:56:23.007328093 +0000 UTC m=+0.134876826 container remove 52f5c2f42d47c9819e96a9ba283c101756e5943f967b4fca4d6bc53e61f281fa (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b, maintainer=The Prometheus Authors ) 2026-03-09T20:56:23.076 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:56:23 vm09.local bash[92869]: ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea-node-exporter-b 2026-03-09T20:56:23.076 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:56:23 vm09.local systemd[1]: ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@node-exporter.b.service: Main process exited, code=exited, status=143/n/a 2026-03-09T20:56:23.336 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:56:23 vm09.local systemd[1]: ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@node-exporter.b.service: Failed with result 'exit-code'. 2026-03-09T20:56:23.337 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:56:23 vm09.local systemd[1]: Stopped Ceph node-exporter.b for c0151936-1bf4-11f1-b896-23f7bea8a6ea. 2026-03-09T20:56:23.337 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 09 20:56:23 vm09.local systemd[1]: ceph-c0151936-1bf4-11f1-b896-23f7bea8a6ea@node-exporter.b.service: Consumed 2.347s CPU time. 2026-03-09T20:56:23.754 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-09T20:56:23.781 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-09T20:56:23.813 INFO:tasks.cephadm:Archiving crash dumps... 2026-03-09T20:56:23.813 DEBUG:teuthology.misc:Transferring archived files from vm05:/var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/crash to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/640/remote/vm05/crash 2026-03-09T20:56:23.813 DEBUG:teuthology.orchestra.run.vm05:> sudo tar c -f - -C /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/crash -- . 2026-03-09T20:56:23.844 INFO:teuthology.orchestra.run.vm05.stderr:tar: /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/crash: Cannot open: No such file or directory 2026-03-09T20:56:23.844 INFO:teuthology.orchestra.run.vm05.stderr:tar: Error is not recoverable: exiting now 2026-03-09T20:56:23.846 DEBUG:teuthology.misc:Transferring archived files from vm09:/var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/crash to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/640/remote/vm09/crash 2026-03-09T20:56:23.846 DEBUG:teuthology.orchestra.run.vm09:> sudo tar c -f - -C /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/crash -- . 
2026-03-09T20:56:23.885 INFO:teuthology.orchestra.run.vm09.stderr:tar: /var/lib/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/crash: Cannot open: No such file or directory 2026-03-09T20:56:23.886 INFO:teuthology.orchestra.run.vm09.stderr:tar: Error is not recoverable: exiting now 2026-03-09T20:56:23.887 INFO:tasks.cephadm:Checking cluster log for badness... 2026-03-09T20:56:23.887 DEBUG:teuthology.orchestra.run.vm05:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph.log | egrep CEPHADM_ | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v 'reached quota' | egrep -v 'but it is still running' | egrep -v 'overall HEALTH_' | egrep -v '\(POOL_FULL\)' | egrep -v '\(SMALLER_PGP_NUM\)' | egrep -v '\(CACHE_POOL_NO_HIT_SET\)' | egrep -v '\(CACHE_POOL_NEAR_FULL\)' | egrep -v '\(POOL_APP_NOT_ENABLED\)' | egrep -v '\(PG_AVAILABILITY\)' | egrep -v '\(PG_DEGRADED\)' | egrep -v CEPHADM_STRAY_DAEMON | head -n 1 2026-03-09T20:56:23.922 INFO:tasks.cephadm:Compressing logs... 2026-03-09T20:56:23.923 DEBUG:teuthology.orchestra.run.vm05:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-09T20:56:23.964 DEBUG:teuthology.orchestra.run.vm09:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-09T20:56:23.989 INFO:teuthology.orchestra.run.vm05.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory 2026-03-09T20:56:23.990 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-09T20:56:23.990 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-mon.a.log 2026-03-09T20:56:23.992 INFO:teuthology.orchestra.run.vm09.stderr:find: gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-09T20:56:23.992 INFO:teuthology.orchestra.run.vm09.stderr:‘/var/log/rbd-target-api’: No such file or directory 2026-03-09T20:56:23.993 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/cephadm.log: gzip -5 --verbose -- /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph.log 2026-03-09T20:56:23.994 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/cephadm.log: gzip -5 --verbose -- /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-volume.log 2026-03-09T20:56:23.994 INFO:teuthology.orchestra.run.vm05.stderr: 92.4% -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-09T20:56:23.994 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-mon.b.log 2026-03-09T20:56:23.994 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-mon.a.log: gzip -5 --verbose -- /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph.audit.log 2026-03-09T20:56:23.999 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-volume.log: gzip -5 --verbose -- /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph.cephadm.log 2026-03-09T20:56:23.999 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph.log: 93.4% -- replaced with /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph.log.gz 2026-03-09T20:56:23.999 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- 
/var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-mgr.y.log 2026-03-09T20:56:24.000 INFO:teuthology.orchestra.run.vm09.stderr: 91.2%/var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-mon.b.log: -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-09T20:56:24.003 INFO:teuthology.orchestra.run.vm09.stderr: 94.9% -- replaced with /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-volume.log.gz 2026-03-09T20:56:24.004 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph.log 2026-03-09T20:56:24.004 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph.audit.log 2026-03-09T20:56:24.006 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph.cephadm.log: /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph.log: 79.9% -- replaced with /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph.cephadm.log.gz 2026-03-09T20:56:24.007 INFO:teuthology.orchestra.run.vm09.stderr: 88.2% -- replaced with /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph.log.gz 2026-03-09T20:56:24.007 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph.audit.log: gzip -5 --verbose -- /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph.cephadm.log 2026-03-09T20:56:24.010 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-mgr.x.log 2026-03-09T20:56:24.010 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-mgr.y.log: 95.2% -- replaced with /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph.audit.log.gz 2026-03-09T20:56:24.010 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph.audit.log: gzip -5 --verbose -- /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-osd.4.log 2026-03-09T20:56:24.012 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-volume.log 2026-03-09T20:56:24.012 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph.cephadm.log: 88.6% -- replaced with /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph.cephadm.log.gz 2026-03-09T20:56:24.012 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-mon.c.log 2026-03-09T20:56:24.013 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-volume.log: gzip -5 --verbose -- /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-osd.0.log 2026-03-09T20:56:24.015 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-osd.5.log 2026-03-09T20:56:24.016 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-mgr.x.log: /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-osd.4.log: 92.3% -- replaced with /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph.audit.log.gz 2026-03-09T20:56:24.019 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-mon.c.log: gzip -5 --verbose -- /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-osd.1.log 2026-03-09T20:56:24.022 INFO:teuthology.orchestra.run.vm09.stderr: 92.5% -- replaced with /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-mgr.x.log.gz 
2026-03-09T20:56:24.023 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-osd.6.log 2026-03-09T20:56:24.023 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-osd.0.log: 94.9% -- replaced with /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-volume.log.gz 2026-03-09T20:56:24.025 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-osd.5.log: gzip -5 --verbose -- /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-osd.7.log 2026-03-09T20:56:24.033 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-osd.2.log 2026-03-09T20:56:24.037 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-osd.6.log: gzip -5 --verbose -- /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/tcmu-runner.log 2026-03-09T20:56:24.039 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-osd.1.log: gzip -5 --verbose -- /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-osd.3.log 2026-03-09T20:56:24.042 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-osd.7.log: /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/tcmu-runner.log: 63.7% -- replaced with /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/tcmu-runner.log.gz 2026-03-09T20:56:24.046 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-osd.2.log: gzip -5 --verbose -- /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-client.rgw.foo.a.log 2026-03-09T20:56:24.168 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-osd.3.log: /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-client.rgw.foo.a.log: 93.8% -- replaced with /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-client.rgw.foo.a.log.gz 2026-03-09T20:56:24.817 INFO:teuthology.orchestra.run.vm05.stderr: 90.6% -- replaced with /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-mgr.y.log.gz 2026-03-09T20:56:26.193 INFO:teuthology.orchestra.run.vm09.stderr: 91.8% -- replaced with /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-mon.b.log.gz 2026-03-09T20:56:26.735 INFO:teuthology.orchestra.run.vm05.stderr: 92.4% -- replaced with /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-mon.c.log.gz 2026-03-09T20:56:28.774 INFO:teuthology.orchestra.run.vm05.stderr: 91.6% -- replaced with /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-mon.a.log.gz 2026-03-09T20:56:34.891 INFO:teuthology.orchestra.run.vm09.stderr: 94.6% -- replaced with /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-osd.6.log.gz 2026-03-09T20:56:34.907 INFO:teuthology.orchestra.run.vm09.stderr: 94.6% -- replaced with /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-osd.5.log.gz 2026-03-09T20:56:34.958 INFO:teuthology.orchestra.run.vm05.stderr: 94.5% -- replaced with /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-osd.2.log.gz 2026-03-09T20:56:34.995 INFO:teuthology.orchestra.run.vm09.stderr: 94.5% -- replaced with /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-osd.7.log.gz 2026-03-09T20:56:35.089 INFO:teuthology.orchestra.run.vm09.stderr: 94.6% -- replaced with /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-osd.4.log.gz 2026-03-09T20:56:35.091 INFO:teuthology.orchestra.run.vm09.stderr: 2026-03-09T20:56:35.091 
INFO:teuthology.orchestra.run.vm09.stderr:real 0m11.111s 2026-03-09T20:56:35.091 INFO:teuthology.orchestra.run.vm09.stderr:user 0m20.771s 2026-03-09T20:56:35.091 INFO:teuthology.orchestra.run.vm09.stderr:sys 0m1.193s 2026-03-09T20:56:35.628 INFO:teuthology.orchestra.run.vm05.stderr: 94.7% -- replaced with /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-osd.1.log.gz 2026-03-09T20:56:35.739 INFO:teuthology.orchestra.run.vm05.stderr: 94.7% -- replaced with /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-osd.0.log.gz 2026-03-09T20:56:35.823 INFO:teuthology.orchestra.run.vm05.stderr: 94.6% -- replaced with /var/log/ceph/c0151936-1bf4-11f1-b896-23f7bea8a6ea/ceph-osd.3.log.gz 2026-03-09T20:56:35.825 INFO:teuthology.orchestra.run.vm05.stderr: 2026-03-09T20:56:35.825 INFO:teuthology.orchestra.run.vm05.stderr:real 0m11.846s 2026-03-09T20:56:35.825 INFO:teuthology.orchestra.run.vm05.stderr:user 0m22.193s 2026-03-09T20:56:35.825 INFO:teuthology.orchestra.run.vm05.stderr:sys 0m1.340s 2026-03-09T20:56:35.826 INFO:tasks.cephadm:Archiving logs... 2026-03-09T20:56:35.826 DEBUG:teuthology.misc:Transferring archived files from vm05:/var/log/ceph to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/640/remote/vm05/log 2026-03-09T20:56:35.826 DEBUG:teuthology.orchestra.run.vm05:> sudo tar c -f - -C /var/log/ceph -- . 2026-03-09T20:56:36.932 DEBUG:teuthology.misc:Transferring archived files from vm09:/var/log/ceph to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/640/remote/vm09/log 2026-03-09T20:56:36.932 DEBUG:teuthology.orchestra.run.vm09:> sudo tar c -f - -C /var/log/ceph -- . 2026-03-09T20:56:37.931 INFO:tasks.cephadm:Removing cluster... 2026-03-09T20:56:37.931 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm rm-cluster --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea --force 2026-03-09T20:56:38.057 INFO:teuthology.orchestra.run.vm05.stdout:Deleting cluster with fsid: c0151936-1bf4-11f1-b896-23f7bea8a6ea 2026-03-09T20:56:38.373 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm rm-cluster --fsid c0151936-1bf4-11f1-b896-23f7bea8a6ea --force 2026-03-09T20:56:38.499 INFO:teuthology.orchestra.run.vm09.stdout:Deleting cluster with fsid: c0151936-1bf4-11f1-b896-23f7bea8a6ea 2026-03-09T20:56:38.854 INFO:tasks.cephadm:Teardown complete 2026-03-09T20:56:38.854 DEBUG:teuthology.run_tasks:Unwinding manager install 2026-03-09T20:56:38.856 INFO:teuthology.task.install.util:Removing shipped files: /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer... 2026-03-09T20:56:38.856 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer 2026-03-09T20:56:38.858 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer 2026-03-09T20:56:38.895 INFO:teuthology.task.install.rpm:Removing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd on rpm system. 
2026-03-09T20:56:38.895 DEBUG:teuthology.orchestra.run.vm05:> 2026-03-09T20:56:38.895 DEBUG:teuthology.orchestra.run.vm05:> for d in ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd ; do 2026-03-09T20:56:38.895 DEBUG:teuthology.orchestra.run.vm05:> sudo yum -y remove $d || true 2026-03-09T20:56:38.895 DEBUG:teuthology.orchestra.run.vm05:> done 2026-03-09T20:56:38.900 INFO:teuthology.task.install.rpm:Removing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd on rpm system. 2026-03-09T20:56:38.901 DEBUG:teuthology.orchestra.run.vm09:> 2026-03-09T20:56:38.901 DEBUG:teuthology.orchestra.run.vm09:> for d in ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd ; do 2026-03-09T20:56:38.901 DEBUG:teuthology.orchestra.run.vm09:> sudo yum -y remove $d || true 2026-03-09T20:56:38.901 DEBUG:teuthology.orchestra.run.vm09:> done 2026-03-09T20:56:39.145 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-09T20:56:39.146 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T20:56:39.146 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size 2026-03-09T20:56:39.146 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T20:56:39.146 INFO:teuthology.orchestra.run.vm09.stdout:Removing: 2026-03-09T20:56:39.146 INFO:teuthology.orchestra.run.vm09.stdout: ceph-radosgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 39 M 2026-03-09T20:56:39.146 INFO:teuthology.orchestra.run.vm09.stdout:Removing unused dependencies: 2026-03-09T20:56:39.146 INFO:teuthology.orchestra.run.vm09.stdout: mailcap noarch 2.1.49-5.el9 @baseos 78 k 2026-03-09T20:56:39.146 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:39.146 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary 2026-03-09T20:56:39.146 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T20:56:39.146 INFO:teuthology.orchestra.run.vm09.stdout:Remove 2 Packages 2026-03-09T20:56:39.146 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:39.146 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 39 M 2026-03-09T20:56:39.146 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check 2026-03-09T20:56:39.151 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded. 2026-03-09T20:56:39.151 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test 2026-03-09T20:56:39.165 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded. 
2026-03-09T20:56:39.165 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction 2026-03-09T20:56:39.197 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1 2026-03-09T20:56:39.201 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 2026-03-09T20:56:39.202 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-09T20:56:39.202 INFO:teuthology.orchestra.run.vm05.stdout: Package Arch Version Repository Size 2026-03-09T20:56:39.202 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-09T20:56:39.202 INFO:teuthology.orchestra.run.vm05.stdout:Removing: 2026-03-09T20:56:39.202 INFO:teuthology.orchestra.run.vm05.stdout: ceph-radosgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 39 M 2026-03-09T20:56:39.202 INFO:teuthology.orchestra.run.vm05.stdout:Removing unused dependencies: 2026-03-09T20:56:39.202 INFO:teuthology.orchestra.run.vm05.stdout: mailcap noarch 2.1.49-5.el9 @baseos 78 k 2026-03-09T20:56:39.202 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:39.202 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary 2026-03-09T20:56:39.202 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-09T20:56:39.202 INFO:teuthology.orchestra.run.vm05.stdout:Remove 2 Packages 2026-03-09T20:56:39.202 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:39.202 INFO:teuthology.orchestra.run.vm05.stdout:Freed space: 39 M 2026-03-09T20:56:39.202 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check 2026-03-09T20:56:39.207 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded. 2026-03-09T20:56:39.207 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test 2026-03-09T20:56:39.221 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-09T20:56:39.221 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T20:56:39.221 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service". 2026-03-09T20:56:39.221 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-radosgw.target". 2026-03-09T20:56:39.221 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-radosgw.target". 2026-03-09T20:56:39.221 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:39.222 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded. 2026-03-09T20:56:39.223 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction 2026-03-09T20:56:39.223 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-09T20:56:39.233 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-09T20:56:39.253 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : mailcap-2.1.49-5.el9.noarch 2/2 2026-03-09T20:56:39.256 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1 2026-03-09T20:56:39.279 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-09T20:56:39.279 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this. 
2026-03-09T20:56:39.279 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service". 2026-03-09T20:56:39.279 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-radosgw.target". 2026-03-09T20:56:39.279 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-radosgw.target". 2026-03-09T20:56:39.279 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:39.281 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-09T20:56:39.291 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-09T20:56:39.306 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : mailcap-2.1.49-5.el9.noarch 2/2 2026-03-09T20:56:39.326 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: mailcap-2.1.49-5.el9.noarch 2/2 2026-03-09T20:56:39.327 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-09T20:56:39.379 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: mailcap-2.1.49-5.el9.noarch 2/2 2026-03-09T20:56:39.379 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-09T20:56:39.379 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 2/2 2026-03-09T20:56:39.380 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:39.380 INFO:teuthology.orchestra.run.vm09.stdout:Removed: 2026-03-09T20:56:39.380 INFO:teuthology.orchestra.run.vm09.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 mailcap-2.1.49-5.el9.noarch 2026-03-09T20:56:39.380 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:39.380 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-09T20:56:39.438 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 2/2 2026-03-09T20:56:39.438 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:39.438 INFO:teuthology.orchestra.run.vm05.stdout:Removed: 2026-03-09T20:56:39.438 INFO:teuthology.orchestra.run.vm05.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 mailcap-2.1.49-5.el9.noarch 2026-03-09T20:56:39.438 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:39.438 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 2026-03-09T20:56:39.600 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 
2026-03-09T20:56:39.601 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T20:56:39.601 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size 2026-03-09T20:56:39.601 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T20:56:39.601 INFO:teuthology.orchestra.run.vm09.stdout:Removing: 2026-03-09T20:56:39.601 INFO:teuthology.orchestra.run.vm09.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 210 M 2026-03-09T20:56:39.601 INFO:teuthology.orchestra.run.vm09.stdout:Removing unused dependencies: 2026-03-09T20:56:39.601 INFO:teuthology.orchestra.run.vm09.stdout: libxslt x86_64 1.1.34-12.el9 @appstream 743 k 2026-03-09T20:56:39.601 INFO:teuthology.orchestra.run.vm09.stdout: socat x86_64 1.7.4.1-8.el9 @appstream 1.1 M 2026-03-09T20:56:39.601 INFO:teuthology.orchestra.run.vm09.stdout: xmlstarlet x86_64 1.6.1-20.el9 @appstream 195 k 2026-03-09T20:56:39.601 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:39.601 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary 2026-03-09T20:56:39.601 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T20:56:39.601 INFO:teuthology.orchestra.run.vm09.stdout:Remove 4 Packages 2026-03-09T20:56:39.601 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:39.602 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 212 M 2026-03-09T20:56:39.602 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check 2026-03-09T20:56:39.604 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded. 2026-03-09T20:56:39.605 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test 2026-03-09T20:56:39.629 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded. 2026-03-09T20:56:39.630 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction 2026-03-09T20:56:39.645 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 
2026-03-09T20:56:39.645 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-09T20:56:39.645 INFO:teuthology.orchestra.run.vm05.stdout: Package Arch Version Repository Size 2026-03-09T20:56:39.645 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-09T20:56:39.645 INFO:teuthology.orchestra.run.vm05.stdout:Removing: 2026-03-09T20:56:39.645 INFO:teuthology.orchestra.run.vm05.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 210 M 2026-03-09T20:56:39.645 INFO:teuthology.orchestra.run.vm05.stdout:Removing unused dependencies: 2026-03-09T20:56:39.645 INFO:teuthology.orchestra.run.vm05.stdout: libxslt x86_64 1.1.34-12.el9 @appstream 743 k 2026-03-09T20:56:39.645 INFO:teuthology.orchestra.run.vm05.stdout: socat x86_64 1.7.4.1-8.el9 @appstream 1.1 M 2026-03-09T20:56:39.645 INFO:teuthology.orchestra.run.vm05.stdout: xmlstarlet x86_64 1.6.1-20.el9 @appstream 195 k 2026-03-09T20:56:39.645 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:39.645 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary 2026-03-09T20:56:39.645 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-09T20:56:39.645 INFO:teuthology.orchestra.run.vm05.stdout:Remove 4 Packages 2026-03-09T20:56:39.645 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:39.646 INFO:teuthology.orchestra.run.vm05.stdout:Freed space: 212 M 2026-03-09T20:56:39.646 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check 2026-03-09T20:56:39.648 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded. 2026-03-09T20:56:39.648 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test 2026-03-09T20:56:39.673 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded. 
2026-03-09T20:56:39.673 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction 2026-03-09T20:56:39.693 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1 2026-03-09T20:56:39.700 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4 2026-03-09T20:56:39.702 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : xmlstarlet-1.6.1-20.el9.x86_64 2/4 2026-03-09T20:56:39.706 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libxslt-1.1.34-12.el9.x86_64 3/4 2026-03-09T20:56:39.722 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : socat-1.7.4.1-8.el9.x86_64 4/4 2026-03-09T20:56:39.726 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1 2026-03-09T20:56:39.732 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4 2026-03-09T20:56:39.734 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : xmlstarlet-1.6.1-20.el9.x86_64 2/4 2026-03-09T20:56:39.738 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libxslt-1.1.34-12.el9.x86_64 3/4 2026-03-09T20:56:39.755 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : socat-1.7.4.1-8.el9.x86_64 4/4 2026-03-09T20:56:39.797 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: socat-1.7.4.1-8.el9.x86_64 4/4 2026-03-09T20:56:39.798 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4 2026-03-09T20:56:39.798 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 2/4 2026-03-09T20:56:39.798 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 3/4 2026-03-09T20:56:39.844 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 4/4 2026-03-09T20:56:39.844 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:39.844 INFO:teuthology.orchestra.run.vm09.stdout:Removed: 2026-03-09T20:56:39.844 INFO:teuthology.orchestra.run.vm09.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 libxslt-1.1.34-12.el9.x86_64 2026-03-09T20:56:39.844 INFO:teuthology.orchestra.run.vm09.stdout: socat-1.7.4.1-8.el9.x86_64 xmlstarlet-1.6.1-20.el9.x86_64 2026-03-09T20:56:39.844 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:39.844 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-09T20:56:39.847 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: socat-1.7.4.1-8.el9.x86_64 4/4 2026-03-09T20:56:39.847 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4 2026-03-09T20:56:39.847 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 2/4 2026-03-09T20:56:39.847 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 3/4 2026-03-09T20:56:39.899 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 4/4 2026-03-09T20:56:39.899 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:39.899 INFO:teuthology.orchestra.run.vm05.stdout:Removed: 2026-03-09T20:56:39.899 INFO:teuthology.orchestra.run.vm05.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 libxslt-1.1.34-12.el9.x86_64 2026-03-09T20:56:39.899 INFO:teuthology.orchestra.run.vm05.stdout: socat-1.7.4.1-8.el9.x86_64 xmlstarlet-1.6.1-20.el9.x86_64 2026-03-09T20:56:39.899 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:39.899 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 2026-03-09T20:56:40.051 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 
2026-03-09T20:56:40.051 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T20:56:40.052 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size 2026-03-09T20:56:40.052 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T20:56:40.052 INFO:teuthology.orchestra.run.vm09.stdout:Removing: 2026-03-09T20:56:40.052 INFO:teuthology.orchestra.run.vm09.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 0 2026-03-09T20:56:40.052 INFO:teuthology.orchestra.run.vm09.stdout:Removing unused dependencies: 2026-03-09T20:56:40.052 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 7.5 M 2026-03-09T20:56:40.052 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 18 M 2026-03-09T20:56:40.052 INFO:teuthology.orchestra.run.vm09.stdout: lua x86_64 5.4.4-4.el9 @appstream 593 k 2026-03-09T20:56:40.052 INFO:teuthology.orchestra.run.vm09.stdout: lua-devel x86_64 5.4.4-4.el9 @crb 49 k 2026-03-09T20:56:40.052 INFO:teuthology.orchestra.run.vm09.stdout: luarocks noarch 3.9.2-5.el9 @epel 692 k 2026-03-09T20:56:40.052 INFO:teuthology.orchestra.run.vm09.stdout: unzip x86_64 6.0-59.el9 @baseos 389 k 2026-03-09T20:56:40.052 INFO:teuthology.orchestra.run.vm09.stdout: zip x86_64 3.0-35.el9 @baseos 724 k 2026-03-09T20:56:40.052 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:40.052 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary 2026-03-09T20:56:40.052 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T20:56:40.052 INFO:teuthology.orchestra.run.vm09.stdout:Remove 8 Packages 2026-03-09T20:56:40.052 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:40.052 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 28 M 2026-03-09T20:56:40.052 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check 2026-03-09T20:56:40.055 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded. 2026-03-09T20:56:40.055 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test 2026-03-09T20:56:40.080 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded. 2026-03-09T20:56:40.081 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction 2026-03-09T20:56:40.112 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 
2026-03-09T20:56:40.113 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-09T20:56:40.113 INFO:teuthology.orchestra.run.vm05.stdout: Package Arch Version Repository Size 2026-03-09T20:56:40.113 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-09T20:56:40.113 INFO:teuthology.orchestra.run.vm05.stdout:Removing: 2026-03-09T20:56:40.113 INFO:teuthology.orchestra.run.vm05.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 0 2026-03-09T20:56:40.113 INFO:teuthology.orchestra.run.vm05.stdout:Removing unused dependencies: 2026-03-09T20:56:40.113 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 7.5 M 2026-03-09T20:56:40.113 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 18 M 2026-03-09T20:56:40.113 INFO:teuthology.orchestra.run.vm05.stdout: lua x86_64 5.4.4-4.el9 @appstream 593 k 2026-03-09T20:56:40.113 INFO:teuthology.orchestra.run.vm05.stdout: lua-devel x86_64 5.4.4-4.el9 @crb 49 k 2026-03-09T20:56:40.113 INFO:teuthology.orchestra.run.vm05.stdout: luarocks noarch 3.9.2-5.el9 @epel 692 k 2026-03-09T20:56:40.113 INFO:teuthology.orchestra.run.vm05.stdout: unzip x86_64 6.0-59.el9 @baseos 389 k 2026-03-09T20:56:40.113 INFO:teuthology.orchestra.run.vm05.stdout: zip x86_64 3.0-35.el9 @baseos 724 k 2026-03-09T20:56:40.113 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:40.113 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary 2026-03-09T20:56:40.113 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-09T20:56:40.113 INFO:teuthology.orchestra.run.vm05.stdout:Remove 8 Packages 2026-03-09T20:56:40.113 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:40.113 INFO:teuthology.orchestra.run.vm05.stdout:Freed space: 28 M 2026-03-09T20:56:40.113 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check 2026-03-09T20:56:40.116 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded. 2026-03-09T20:56:40.116 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test 2026-03-09T20:56:40.121 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1 2026-03-09T20:56:40.126 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8 2026-03-09T20:56:40.130 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : luarocks-3.9.2-5.el9.noarch 2/8 2026-03-09T20:56:40.133 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : lua-devel-5.4.4-4.el9.x86_64 3/8 2026-03-09T20:56:40.136 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : zip-3.0-35.el9.x86_64 4/8 2026-03-09T20:56:40.138 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : unzip-6.0-59.el9.x86_64 5/8 2026-03-09T20:56:40.140 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : lua-5.4.4-4.el9.x86_64 6/8 2026-03-09T20:56:40.142 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded. 2026-03-09T20:56:40.142 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction 2026-03-09T20:56:40.162 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8 2026-03-09T20:56:40.162 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this. 
2026-03-09T20:56:40.162 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service". 2026-03-09T20:56:40.162 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mds.target". 2026-03-09T20:56:40.162 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mds.target". 2026-03-09T20:56:40.162 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:40.162 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8 2026-03-09T20:56:40.170 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8 2026-03-09T20:56:40.182 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1 2026-03-09T20:56:40.188 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8 2026-03-09T20:56:40.191 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : luarocks-3.9.2-5.el9.noarch 2/8 2026-03-09T20:56:40.191 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8 2026-03-09T20:56:40.191 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T20:56:40.191 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service". 2026-03-09T20:56:40.191 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mon.target". 2026-03-09T20:56:40.191 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mon.target". 2026-03-09T20:56:40.191 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:40.192 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8 2026-03-09T20:56:40.194 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : lua-devel-5.4.4-4.el9.x86_64 3/8 2026-03-09T20:56:40.196 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : zip-3.0-35.el9.x86_64 4/8 2026-03-09T20:56:40.199 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : unzip-6.0-59.el9.x86_64 5/8 2026-03-09T20:56:40.201 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : lua-5.4.4-4.el9.x86_64 6/8 2026-03-09T20:56:40.223 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8 2026-03-09T20:56:40.223 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T20:56:40.223 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service". 2026-03-09T20:56:40.223 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mds.target". 2026-03-09T20:56:40.223 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mds.target". 
2026-03-09T20:56:40.223 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:40.224 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8 2026-03-09T20:56:40.232 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8 2026-03-09T20:56:40.253 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8 2026-03-09T20:56:40.253 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T20:56:40.253 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service". 2026-03-09T20:56:40.253 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mon.target". 2026-03-09T20:56:40.253 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mon.target". 2026-03-09T20:56:40.253 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:40.254 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8 2026-03-09T20:56:40.291 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8 2026-03-09T20:56:40.291 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8 2026-03-09T20:56:40.291 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 2/8 2026-03-09T20:56:40.291 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 3/8 2026-03-09T20:56:40.291 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : lua-5.4.4-4.el9.x86_64 4/8 2026-03-09T20:56:40.291 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 5/8 2026-03-09T20:56:40.291 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 6/8 2026-03-09T20:56:40.291 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : unzip-6.0-59.el9.x86_64 7/8 2026-03-09T20:56:40.341 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : zip-3.0-35.el9.x86_64 8/8 2026-03-09T20:56:40.341 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:40.341 INFO:teuthology.orchestra.run.vm09.stdout:Removed: 2026-03-09T20:56:40.341 INFO:teuthology.orchestra.run.vm09.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:40.341 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:40.341 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:40.341 INFO:teuthology.orchestra.run.vm09.stdout: lua-5.4.4-4.el9.x86_64 2026-03-09T20:56:40.341 INFO:teuthology.orchestra.run.vm09.stdout: lua-devel-5.4.4-4.el9.x86_64 2026-03-09T20:56:40.341 INFO:teuthology.orchestra.run.vm09.stdout: luarocks-3.9.2-5.el9.noarch 2026-03-09T20:56:40.341 INFO:teuthology.orchestra.run.vm09.stdout: unzip-6.0-59.el9.x86_64 2026-03-09T20:56:40.341 INFO:teuthology.orchestra.run.vm09.stdout: zip-3.0-35.el9.x86_64 2026-03-09T20:56:40.341 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:40.341 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 
2026-03-09T20:56:40.350 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8 2026-03-09T20:56:40.350 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8 2026-03-09T20:56:40.350 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 2/8 2026-03-09T20:56:40.350 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 3/8 2026-03-09T20:56:40.350 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : lua-5.4.4-4.el9.x86_64 4/8 2026-03-09T20:56:40.350 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 5/8 2026-03-09T20:56:40.350 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 6/8 2026-03-09T20:56:40.350 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : unzip-6.0-59.el9.x86_64 7/8 2026-03-09T20:56:40.413 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : zip-3.0-35.el9.x86_64 8/8 2026-03-09T20:56:40.413 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:40.413 INFO:teuthology.orchestra.run.vm05.stdout:Removed: 2026-03-09T20:56:40.413 INFO:teuthology.orchestra.run.vm05.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:40.413 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:40.413 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:40.413 INFO:teuthology.orchestra.run.vm05.stdout: lua-5.4.4-4.el9.x86_64 2026-03-09T20:56:40.413 INFO:teuthology.orchestra.run.vm05.stdout: lua-devel-5.4.4-4.el9.x86_64 2026-03-09T20:56:40.413 INFO:teuthology.orchestra.run.vm05.stdout: luarocks-3.9.2-5.el9.noarch 2026-03-09T20:56:40.413 INFO:teuthology.orchestra.run.vm05.stdout: unzip-6.0-59.el9.x86_64 2026-03-09T20:56:40.413 INFO:teuthology.orchestra.run.vm05.stdout: zip-3.0-35.el9.x86_64 2026-03-09T20:56:40.413 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:40.413 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 2026-03-09T20:56:40.550 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 
2026-03-09T20:56:40.556 INFO:teuthology.orchestra.run.vm09.stdout:=========================================================================================== 2026-03-09T20:56:40.556 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size 2026-03-09T20:56:40.556 INFO:teuthology.orchestra.run.vm09.stdout:=========================================================================================== 2026-03-09T20:56:40.556 INFO:teuthology.orchestra.run.vm09.stdout:Removing: 2026-03-09T20:56:40.556 INFO:teuthology.orchestra.run.vm09.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 23 M 2026-03-09T20:56:40.556 INFO:teuthology.orchestra.run.vm09.stdout:Removing dependent packages: 2026-03-09T20:56:40.556 INFO:teuthology.orchestra.run.vm09.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 431 k 2026-03-09T20:56:40.556 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.4 M 2026-03-09T20:56:40.556 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 806 k 2026-03-09T20:56:40.556 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 88 M 2026-03-09T20:56:40.556 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 66 M 2026-03-09T20:56:40.556 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 563 k 2026-03-09T20:56:40.556 INFO:teuthology.orchestra.run.vm09.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 59 M 2026-03-09T20:56:40.556 INFO:teuthology.orchestra.run.vm09.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.4 M 2026-03-09T20:56:40.556 INFO:teuthology.orchestra.run.vm09.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M 2026-03-09T20:56:40.556 INFO:teuthology.orchestra.run.vm09.stdout:Removing unused dependencies: 2026-03-09T20:56:40.556 INFO:teuthology.orchestra.run.vm09.stdout: abseil-cpp x86_64 20211102.0-4.el9 @epel 1.9 M 2026-03-09T20:56:40.556 INFO:teuthology.orchestra.run.vm09.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 85 M 2026-03-09T20:56:40.556 INFO:teuthology.orchestra.run.vm09.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 628 k 2026-03-09T20:56:40.556 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.5 M 2026-03-09T20:56:40.556 INFO:teuthology.orchestra.run.vm09.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 52 k 2026-03-09T20:56:40.556 INFO:teuthology.orchestra.run.vm09.stdout: ceph-selinux x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 138 k 2026-03-09T20:56:40.556 INFO:teuthology.orchestra.run.vm09.stdout: cryptsetup x86_64 2.8.1-3.el9 @baseos 770 k 2026-03-09T20:56:40.556 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas x86_64 3.0.4-9.el9 @appstream 68 k 2026-03-09T20:56:40.556 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 @appstream 11 M 2026-03-09T20:56:40.556 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 @appstream 39 k 2026-03-09T20:56:40.556 INFO:teuthology.orchestra.run.vm09.stdout: gperftools-libs x86_64 2.9.1-3.el9 @epel 1.4 M 2026-03-09T20:56:40.556 INFO:teuthology.orchestra.run.vm09.stdout: grpc-data noarch 1.46.7-10.el9 @epel 13 k 2026-03-09T20:56:40.556 
INFO:teuthology.orchestra.run.vm09.stdout: ledmon-libs x86_64 1.1.0-3.el9 @baseos 80 k 2026-03-09T20:56:40.556 INFO:teuthology.orchestra.run.vm09.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 425 k 2026-03-09T20:56:40.556 INFO:teuthology.orchestra.run.vm09.stdout: libconfig x86_64 1.7.2-9.el9 @baseos 220 k 2026-03-09T20:56:40.556 INFO:teuthology.orchestra.run.vm09.stdout: libgfortran x86_64 11.5.0-14.el9 @baseos 2.8 M 2026-03-09T20:56:40.556 INFO:teuthology.orchestra.run.vm09.stdout: liboath x86_64 2.6.12-1.el9 @epel 94 k 2026-03-09T20:56:40.556 INFO:teuthology.orchestra.run.vm09.stdout: libquadmath x86_64 11.5.0-14.el9 @baseos 330 k 2026-03-09T20:56:40.556 INFO:teuthology.orchestra.run.vm09.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.6 M 2026-03-09T20:56:40.556 INFO:teuthology.orchestra.run.vm09.stdout: libstoragemgmt x86_64 1.10.1-1.el9 @appstream 685 k 2026-03-09T20:56:40.556 INFO:teuthology.orchestra.run.vm09.stdout: libunwind x86_64 1.6.2-1.el9 @epel 170 k 2026-03-09T20:56:40.556 INFO:teuthology.orchestra.run.vm09.stdout: openblas x86_64 0.3.29-1.el9 @appstream 112 k 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: openblas-openmp x86_64 0.3.29-1.el9 @appstream 46 M 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: pciutils x86_64 3.7.0-7.el9 @baseos 216 k 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: protobuf x86_64 3.14.0-17.el9 @appstream 3.5 M 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: protobuf-compiler x86_64 3.14.0-17.el9 @crb 2.9 M 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-asyncssh noarch 2.13.2-5.el9 @epel 3.9 M 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-autocommand noarch 2.2.2-8.el9 @epel 82 k 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-babel noarch 2.9.1-2.el9 @appstream 27 M 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 @epel 254 k 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-bcrypt x86_64 3.2.2-1.el9 @epel 87 k 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools noarch 4.2.4-1.el9 @epel 93 k 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 702 k 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-certifi noarch 2023.05.07-4.el9 @epel 6.3 k 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-cffi x86_64 1.14.5-5.el9 @baseos 1.0 M 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-chardet noarch 4.0.0-5.el9 @anaconda 1.4 M 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-cheroot noarch 10.0.1-4.el9 @epel 682 k 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy noarch 18.6.1-2.el9 @epel 1.1 M 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-cryptography x86_64 36.0.1-5.el9 @baseos 4.5 M 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-devel x86_64 3.9.25-3.el9 @appstream 765 k 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-google-auth noarch 1:2.45.0-1.el9 @epel 1.4 M 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-grpcio x86_64 1.46.7-10.el9 @epel 6.7 M 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: 
python3-grpcio-tools x86_64 1.46.7-10.el9 @epel 418 k 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-idna noarch 2.10-7.el9.1 @anaconda 513 k 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco noarch 8.2.1-3.el9 @epel 3.7 k 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 @epel 24 k 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 @epel 55 k 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-context noarch 6.0.1-3.el9 @epel 31 k 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 @epel 33 k 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-text noarch 4.0.0-2.el9 @epel 51 k 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-jinja2 noarch 2.11.3-8.el9 @appstream 1.1 M 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-jsonpatch noarch 1.21-16.el9 @koji-override-0 55 k 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-jsonpointer noarch 2.0-4.el9 @koji-override-0 34 k 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 @epel 21 M 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 @appstream 832 k 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-logutils noarch 0.3.5-21.el9 @epel 126 k 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-mako noarch 1.1.4-6.el9 @appstream 534 k 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-markupsafe x86_64 1.1.1-12.el9 @appstream 60 k 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-more-itertools noarch 8.12.0-2.el9 @epel 378 k 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort noarch 7.1.1-5.el9 @epel 215 k 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy x86_64 1:1.23.5-2.el9 @appstream 30 M 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 @appstream 1.7 M 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-oauthlib noarch 3.1.1-5.el9 @koji-override-0 888 k 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-packaging noarch 20.9-5.el9 @appstream 248 k 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-pecan noarch 1.4.2-3.el9 @epel 1.3 M 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-ply noarch 3.11-14.el9 @baseos 430 k 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-portend noarch 3.1.0-2.el9 @epel 20 k 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-prettytable noarch 0.7.2-27.el9 @koji-override-0 166 k 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-protobuf noarch 3.14.0-17.el9 @appstream 1.4 M 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 @epel 389 k 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1 noarch 0.4.8-7.el9 @appstream 622 k 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 @appstream 1.0 M 
2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-pycparser noarch 2.20-6.el9 @baseos 745 k 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-pysocks noarch 1.7.1-12.el9 @anaconda 88 k 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-pytz noarch 2021.1-5.el9 @koji-override-0 176 k 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-repoze-lru noarch 0.7-16.el9 @epel 83 k 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests noarch 2.25.1-10.el9 @baseos 405 k 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 @appstream 119 k 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-routes noarch 2.5.1-5.el9 @epel 459 k 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-rsa noarch 4.9-2.el9 @epel 202 k 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-scipy x86_64 1.9.3-2.el9 @appstream 76 M 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-tempora noarch 5.0.0-2.el9 @epel 96 k 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-toml noarch 0.10.2-6.el9 @appstream 99 k 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-typing-extensions noarch 4.15.0-1.el9 @epel 447 k 2026-03-09T20:56:40.557 INFO:teuthology.orchestra.run.vm09.stdout: python3-urllib3 noarch 1.26.5-7.el9 @baseos 746 k 2026-03-09T20:56:40.558 INFO:teuthology.orchestra.run.vm09.stdout: python3-webob noarch 1.8.8-2.el9 @epel 1.2 M 2026-03-09T20:56:40.558 INFO:teuthology.orchestra.run.vm09.stdout: python3-websocket-client noarch 1.2.3-2.el9 @epel 319 k 2026-03-09T20:56:40.558 INFO:teuthology.orchestra.run.vm09.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 @epel 1.9 M 2026-03-09T20:56:40.558 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc-lockfile noarch 2.0-10.el9 @epel 35 k 2026-03-09T20:56:40.558 INFO:teuthology.orchestra.run.vm09.stdout: qatlib x86_64 25.08.0-2.el9 @appstream 639 k 2026-03-09T20:56:40.558 INFO:teuthology.orchestra.run.vm09.stdout: qatlib-service x86_64 25.08.0-2.el9 @appstream 69 k 2026-03-09T20:56:40.558 INFO:teuthology.orchestra.run.vm09.stdout: qatzip-libs x86_64 1.3.1-1.el9 @appstream 148 k 2026-03-09T20:56:40.558 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:40.558 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary 2026-03-09T20:56:40.558 INFO:teuthology.orchestra.run.vm09.stdout:=========================================================================================== 2026-03-09T20:56:40.558 INFO:teuthology.orchestra.run.vm09.stdout:Remove 102 Packages 2026-03-09T20:56:40.558 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:40.558 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 613 M 2026-03-09T20:56:40.558 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check 2026-03-09T20:56:40.584 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded. 2026-03-09T20:56:40.584 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test 2026-03-09T20:56:40.645 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 
2026-03-09T20:56:40.651 INFO:teuthology.orchestra.run.vm05.stdout:=========================================================================================== 2026-03-09T20:56:40.651 INFO:teuthology.orchestra.run.vm05.stdout: Package Arch Version Repository Size 2026-03-09T20:56:40.651 INFO:teuthology.orchestra.run.vm05.stdout:=========================================================================================== 2026-03-09T20:56:40.651 INFO:teuthology.orchestra.run.vm05.stdout:Removing: 2026-03-09T20:56:40.651 INFO:teuthology.orchestra.run.vm05.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 23 M 2026-03-09T20:56:40.651 INFO:teuthology.orchestra.run.vm05.stdout:Removing dependent packages: 2026-03-09T20:56:40.651 INFO:teuthology.orchestra.run.vm05.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 431 k 2026-03-09T20:56:40.651 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.4 M 2026-03-09T20:56:40.651 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 806 k 2026-03-09T20:56:40.651 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 88 M 2026-03-09T20:56:40.651 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 66 M 2026-03-09T20:56:40.651 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 563 k 2026-03-09T20:56:40.651 INFO:teuthology.orchestra.run.vm05.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 59 M 2026-03-09T20:56:40.651 INFO:teuthology.orchestra.run.vm05.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.4 M 2026-03-09T20:56:40.651 INFO:teuthology.orchestra.run.vm05.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M 2026-03-09T20:56:40.651 INFO:teuthology.orchestra.run.vm05.stdout:Removing unused dependencies: 2026-03-09T20:56:40.651 INFO:teuthology.orchestra.run.vm05.stdout: abseil-cpp x86_64 20211102.0-4.el9 @epel 1.9 M 2026-03-09T20:56:40.651 INFO:teuthology.orchestra.run.vm05.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 85 M 2026-03-09T20:56:40.651 INFO:teuthology.orchestra.run.vm05.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 628 k 2026-03-09T20:56:40.651 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.5 M 2026-03-09T20:56:40.651 INFO:teuthology.orchestra.run.vm05.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 52 k 2026-03-09T20:56:40.651 INFO:teuthology.orchestra.run.vm05.stdout: ceph-selinux x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 138 k 2026-03-09T20:56:40.651 INFO:teuthology.orchestra.run.vm05.stdout: cryptsetup x86_64 2.8.1-3.el9 @baseos 770 k 2026-03-09T20:56:40.651 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas x86_64 3.0.4-9.el9 @appstream 68 k 2026-03-09T20:56:40.651 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 @appstream 11 M 2026-03-09T20:56:40.651 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 @appstream 39 k 2026-03-09T20:56:40.651 INFO:teuthology.orchestra.run.vm05.stdout: gperftools-libs x86_64 2.9.1-3.el9 @epel 1.4 M 2026-03-09T20:56:40.651 INFO:teuthology.orchestra.run.vm05.stdout: grpc-data noarch 1.46.7-10.el9 @epel 13 k 2026-03-09T20:56:40.651 
INFO:teuthology.orchestra.run.vm05.stdout: ledmon-libs x86_64 1.1.0-3.el9 @baseos 80 k 2026-03-09T20:56:40.651 INFO:teuthology.orchestra.run.vm05.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 425 k 2026-03-09T20:56:40.651 INFO:teuthology.orchestra.run.vm05.stdout: libconfig x86_64 1.7.2-9.el9 @baseos 220 k 2026-03-09T20:56:40.651 INFO:teuthology.orchestra.run.vm05.stdout: libgfortran x86_64 11.5.0-14.el9 @baseos 2.8 M 2026-03-09T20:56:40.651 INFO:teuthology.orchestra.run.vm05.stdout: liboath x86_64 2.6.12-1.el9 @epel 94 k 2026-03-09T20:56:40.651 INFO:teuthology.orchestra.run.vm05.stdout: libquadmath x86_64 11.5.0-14.el9 @baseos 330 k 2026-03-09T20:56:40.651 INFO:teuthology.orchestra.run.vm05.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.6 M 2026-03-09T20:56:40.651 INFO:teuthology.orchestra.run.vm05.stdout: libstoragemgmt x86_64 1.10.1-1.el9 @appstream 685 k 2026-03-09T20:56:40.651 INFO:teuthology.orchestra.run.vm05.stdout: libunwind x86_64 1.6.2-1.el9 @epel 170 k 2026-03-09T20:56:40.651 INFO:teuthology.orchestra.run.vm05.stdout: openblas x86_64 0.3.29-1.el9 @appstream 112 k 2026-03-09T20:56:40.651 INFO:teuthology.orchestra.run.vm05.stdout: openblas-openmp x86_64 0.3.29-1.el9 @appstream 46 M 2026-03-09T20:56:40.651 INFO:teuthology.orchestra.run.vm05.stdout: pciutils x86_64 3.7.0-7.el9 @baseos 216 k 2026-03-09T20:56:40.651 INFO:teuthology.orchestra.run.vm05.stdout: protobuf x86_64 3.14.0-17.el9 @appstream 3.5 M 2026-03-09T20:56:40.651 INFO:teuthology.orchestra.run.vm05.stdout: protobuf-compiler x86_64 3.14.0-17.el9 @crb 2.9 M 2026-03-09T20:56:40.651 INFO:teuthology.orchestra.run.vm05.stdout: python3-asyncssh noarch 2.13.2-5.el9 @epel 3.9 M 2026-03-09T20:56:40.651 INFO:teuthology.orchestra.run.vm05.stdout: python3-autocommand noarch 2.2.2-8.el9 @epel 82 k 2026-03-09T20:56:40.651 INFO:teuthology.orchestra.run.vm05.stdout: python3-babel noarch 2.9.1-2.el9 @appstream 27 M 2026-03-09T20:56:40.651 INFO:teuthology.orchestra.run.vm05.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 @epel 254 k 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-bcrypt x86_64 3.2.2-1.el9 @epel 87 k 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-cachetools noarch 4.2.4-1.el9 @epel 93 k 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 702 k 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-certifi noarch 2023.05.07-4.el9 @epel 6.3 k 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-cffi x86_64 1.14.5-5.el9 @baseos 1.0 M 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-chardet noarch 4.0.0-5.el9 @anaconda 1.4 M 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-cheroot noarch 10.0.1-4.el9 @epel 682 k 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-cherrypy noarch 18.6.1-2.el9 @epel 1.1 M 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-cryptography x86_64 36.0.1-5.el9 @baseos 4.5 M 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-devel x86_64 3.9.25-3.el9 @appstream 765 k 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-google-auth noarch 1:2.45.0-1.el9 @epel 1.4 M 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-grpcio x86_64 1.46.7-10.el9 @epel 6.7 M 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: 
python3-grpcio-tools x86_64 1.46.7-10.el9 @epel 418 k 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-idna noarch 2.10-7.el9.1 @anaconda 513 k 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco noarch 8.2.1-3.el9 @epel 3.7 k 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 @epel 24 k 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 @epel 55 k 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-context noarch 6.0.1-3.el9 @epel 31 k 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 @epel 33 k 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-text noarch 4.0.0-2.el9 @epel 51 k 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-jinja2 noarch 2.11.3-8.el9 @appstream 1.1 M 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-jsonpatch noarch 1.21-16.el9 @koji-override-0 55 k 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-jsonpointer noarch 2.0-4.el9 @koji-override-0 34 k 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 @epel 21 M 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 @appstream 832 k 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-logutils noarch 0.3.5-21.el9 @epel 126 k 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-mako noarch 1.1.4-6.el9 @appstream 534 k 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-markupsafe x86_64 1.1.1-12.el9 @appstream 60 k 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-more-itertools noarch 8.12.0-2.el9 @epel 378 k 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-natsort noarch 7.1.1-5.el9 @epel 215 k 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-numpy x86_64 1:1.23.5-2.el9 @appstream 30 M 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 @appstream 1.7 M 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-oauthlib noarch 3.1.1-5.el9 @koji-override-0 888 k 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-packaging noarch 20.9-5.el9 @appstream 248 k 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-pecan noarch 1.4.2-3.el9 @epel 1.3 M 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-ply noarch 3.11-14.el9 @baseos 430 k 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-portend noarch 3.1.0-2.el9 @epel 20 k 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-prettytable noarch 0.7.2-27.el9 @koji-override-0 166 k 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-protobuf noarch 3.14.0-17.el9 @appstream 1.4 M 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 @epel 389 k 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyasn1 noarch 0.4.8-7.el9 @appstream 622 k 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 @appstream 1.0 M 
2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-pycparser noarch 2.20-6.el9 @baseos 745 k 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-pysocks noarch 1.7.1-12.el9 @anaconda 88 k 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-pytz noarch 2021.1-5.el9 @koji-override-0 176 k 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-repoze-lru noarch 0.7-16.el9 @epel 83 k 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-requests noarch 2.25.1-10.el9 @baseos 405 k 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 @appstream 119 k 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-routes noarch 2.5.1-5.el9 @epel 459 k 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-rsa noarch 4.9-2.el9 @epel 202 k 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-scipy x86_64 1.9.3-2.el9 @appstream 76 M 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-tempora noarch 5.0.0-2.el9 @epel 96 k 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-toml noarch 0.10.2-6.el9 @appstream 99 k 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-typing-extensions noarch 4.15.0-1.el9 @epel 447 k 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-urllib3 noarch 1.26.5-7.el9 @baseos 746 k 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-webob noarch 1.8.8-2.el9 @epel 1.2 M 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-websocket-client noarch 1.2.3-2.el9 @epel 319 k 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 @epel 1.9 M 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: python3-zc-lockfile noarch 2.0-10.el9 @epel 35 k 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: qatlib x86_64 25.08.0-2.el9 @appstream 639 k 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: qatlib-service x86_64 25.08.0-2.el9 @appstream 69 k 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: qatzip-libs x86_64 1.3.1-1.el9 @appstream 148 k 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:40.652 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary 2026-03-09T20:56:40.653 INFO:teuthology.orchestra.run.vm05.stdout:=========================================================================================== 2026-03-09T20:56:40.653 INFO:teuthology.orchestra.run.vm05.stdout:Remove 102 Packages 2026-03-09T20:56:40.653 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:40.653 INFO:teuthology.orchestra.run.vm05.stdout:Freed space: 613 M 2026-03-09T20:56:40.653 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check 2026-03-09T20:56:40.679 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded. 2026-03-09T20:56:40.679 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test 2026-03-09T20:56:40.696 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded. 2026-03-09T20:56:40.696 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction 2026-03-09T20:56:40.798 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded. 
2026-03-09T20:56:40.798 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction 2026-03-09T20:56:40.850 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1 2026-03-09T20:56:40.850 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/102 2026-03-09T20:56:40.858 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/102 2026-03-09T20:56:40.875 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-09T20:56:40.875 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T20:56:40.875 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service". 2026-03-09T20:56:40.875 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mgr.target". 2026-03-09T20:56:40.875 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mgr.target". 2026-03-09T20:56:40.875 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:40.875 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-09T20:56:40.888 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-09T20:56:40.907 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 3/102 2026-03-09T20:56:40.907 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/102 2026-03-09T20:56:40.957 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1 2026-03-09T20:56:40.957 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/102 2026-03-09T20:56:40.965 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/102 2026-03-09T20:56:40.965 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/102 2026-03-09T20:56:40.975 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-kubernetes-1:26.1.0-3.el9.noarch 5/102 2026-03-09T20:56:40.980 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-requests-oauthlib-1.3.0-12.el9.noarch 6/102 2026-03-09T20:56:40.980 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102 2026-03-09T20:56:40.988 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-09T20:56:40.988 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T20:56:40.988 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service". 2026-03-09T20:56:40.988 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mgr.target". 2026-03-09T20:56:40.988 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mgr.target". 
2026-03-09T20:56:40.988 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:40.989 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-09T20:56:40.993 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102 2026-03-09T20:56:41.001 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-cherrypy-18.6.1-2.el9.noarch 8/102 2026-03-09T20:56:41.002 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-09T20:56:41.005 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-cheroot-10.0.1-4.el9.noarch 9/102 2026-03-09T20:56:41.014 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-grpcio-tools-1.46.7-10.el9.x86_64 10/102 2026-03-09T20:56:41.019 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-grpcio-1.46.7-10.el9.x86_64 11/102 2026-03-09T20:56:41.021 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 3/102 2026-03-09T20:56:41.022 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/102 2026-03-09T20:56:41.045 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-09T20:56:41.045 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T20:56:41.045 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service". 2026-03-09T20:56:41.045 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-osd.target". 2026-03-09T20:56:41.045 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-osd.target". 2026-03-09T20:56:41.045 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:41.047 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-09T20:56:41.058 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-09T20:56:41.075 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/102 2026-03-09T20:56:41.077 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102 2026-03-09T20:56:41.077 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T20:56:41.077 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service". 
2026-03-09T20:56:41.077 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:41.085 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-kubernetes-1:26.1.0-3.el9.noarch 5/102 2026-03-09T20:56:41.087 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102 2026-03-09T20:56:41.089 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-requests-oauthlib-1.3.0-12.el9.noarch 6/102 2026-03-09T20:56:41.090 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102 2026-03-09T20:56:41.098 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102 2026-03-09T20:56:41.101 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jaraco-collections-3.0.0-8.el9.noarch 14/102 2026-03-09T20:56:41.102 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102 2026-03-09T20:56:41.106 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jaraco-text-4.0.0-2.el9.noarch 15/102 2026-03-09T20:56:41.109 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-cherrypy-18.6.1-2.el9.noarch 8/102 2026-03-09T20:56:41.110 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jinja2-2.11.3-8.el9.noarch 16/102 2026-03-09T20:56:41.115 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-cheroot-10.0.1-4.el9.noarch 9/102 2026-03-09T20:56:41.120 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-requests-2.25.1-10.el9.noarch 17/102 2026-03-09T20:56:41.124 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-grpcio-tools-1.46.7-10.el9.x86_64 10/102 2026-03-09T20:56:41.128 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-grpcio-1.46.7-10.el9.x86_64 11/102 2026-03-09T20:56:41.133 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-google-auth-1:2.45.0-1.el9.noarch 18/102 2026-03-09T20:56:41.139 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-pecan-1.4.2-3.el9.noarch 19/102 2026-03-09T20:56:41.150 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-rsa-4.9-2.el9.noarch 20/102 2026-03-09T20:56:41.154 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-09T20:56:41.154 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T20:56:41.154 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service". 2026-03-09T20:56:41.154 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-osd.target". 2026-03-09T20:56:41.154 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-osd.target". 
2026-03-09T20:56:41.154 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:41.154 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-09T20:56:41.157 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-pyasn1-modules-0.4.8-7.el9.noarch 21/102 2026-03-09T20:56:41.163 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-09T20:56:41.183 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102 2026-03-09T20:56:41.183 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T20:56:41.183 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service". 2026-03-09T20:56:41.183 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:41.184 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-urllib3-1.26.5-7.el9.noarch 22/102 2026-03-09T20:56:41.190 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102 2026-03-09T20:56:41.191 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-babel-2.9.1-2.el9.noarch 23/102 2026-03-09T20:56:41.195 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jaraco-classes-3.2.1-5.el9.noarch 24/102 2026-03-09T20:56:41.200 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102 2026-03-09T20:56:41.202 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jaraco-collections-3.0.0-8.el9.noarch 14/102 2026-03-09T20:56:41.203 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-pyOpenSSL-21.0.0-1.el9.noarch 25/102 2026-03-09T20:56:41.207 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jaraco-text-4.0.0-2.el9.noarch 15/102 2026-03-09T20:56:41.211 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-asyncssh-2.13.2-5.el9.noarch 26/102 2026-03-09T20:56:41.211 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/102 2026-03-09T20:56:41.212 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jinja2-2.11.3-8.el9.noarch 16/102 2026-03-09T20:56:41.219 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/102 2026-03-09T20:56:41.222 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-requests-2.25.1-10.el9.noarch 17/102 2026-03-09T20:56:41.258 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-google-auth-1:2.45.0-1.el9.noarch 18/102 2026-03-09T20:56:41.305 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-pecan-1.4.2-3.el9.noarch 19/102 2026-03-09T20:56:41.306 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jsonpatch-1.21-16.el9.noarch 28/102 2026-03-09T20:56:41.316 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-rsa-4.9-2.el9.noarch 20/102 2026-03-09T20:56:41.323 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-pyasn1-modules-0.4.8-7.el9.noarch 21/102 2026-03-09T20:56:41.324 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-scipy-1.9.3-2.el9.x86_64 29/102 2026-03-09T20:56:41.337 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/102 2026-03-09T20:56:41.337 INFO:teuthology.orchestra.run.vm09.stdout:Removed 
"/etc/systemd/system/multi-user.target.wants/libstoragemgmt.service". 2026-03-09T20:56:41.337 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:41.338 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libstoragemgmt-1.10.1-1.el9.x86_64 30/102 2026-03-09T20:56:41.351 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-urllib3-1.26.5-7.el9.noarch 22/102 2026-03-09T20:56:41.358 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-babel-2.9.1-2.el9.noarch 23/102 2026-03-09T20:56:41.361 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jaraco-classes-3.2.1-5.el9.noarch 24/102 2026-03-09T20:56:41.366 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/102 2026-03-09T20:56:41.369 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-pyOpenSSL-21.0.0-1.el9.noarch 25/102 2026-03-09T20:56:41.377 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-asyncssh-2.13.2-5.el9.noarch 26/102 2026-03-09T20:56:41.377 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/102 2026-03-09T20:56:41.382 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 31/102 2026-03-09T20:56:41.386 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/102 2026-03-09T20:56:41.389 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-cryptography-36.0.1-5.el9.x86_64 32/102 2026-03-09T20:56:41.393 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : protobuf-compiler-3.14.0-17.el9.x86_64 33/102 2026-03-09T20:56:41.396 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-bcrypt-3.2.2-1.el9.x86_64 34/102 2026-03-09T20:56:41.416 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102 2026-03-09T20:56:41.416 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T20:56:41.416 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service". 2026-03-09T20:56:41.416 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target". 2026-03-09T20:56:41.416 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target". 
2026-03-09T20:56:41.416 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:41.417 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102 2026-03-09T20:56:41.428 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102 2026-03-09T20:56:41.433 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-mako-1.1.4-6.el9.noarch 36/102 2026-03-09T20:56:41.436 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jaraco-context-6.0.1-3.el9.noarch 37/102 2026-03-09T20:56:41.439 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-portend-3.1.0-2.el9.noarch 38/102 2026-03-09T20:56:41.443 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-tempora-5.0.0-2.el9.noarch 39/102 2026-03-09T20:56:41.446 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jaraco-functools-3.5.0-2.el9.noarch 40/102 2026-03-09T20:56:41.451 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-routes-2.5.1-5.el9.noarch 41/102 2026-03-09T20:56:41.455 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-cffi-1.14.5-5.el9.x86_64 42/102 2026-03-09T20:56:41.483 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jsonpatch-1.21-16.el9.noarch 28/102 2026-03-09T20:56:41.499 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-scipy-1.9.3-2.el9.x86_64 29/102 2026-03-09T20:56:41.506 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-pycparser-2.20-6.el9.noarch 43/102 2026-03-09T20:56:41.512 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/102 2026-03-09T20:56:41.512 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/multi-user.target.wants/libstoragemgmt.service". 2026-03-09T20:56:41.512 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:41.513 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libstoragemgmt-1.10.1-1.el9.x86_64 30/102 2026-03-09T20:56:41.518 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-numpy-1:1.23.5-2.el9.x86_64 44/102 2026-03-09T20:56:41.522 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : flexiblas-netlib-3.0.4-9.el9.x86_64 45/102 2026-03-09T20:56:41.525 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 46/102 2026-03-09T20:56:41.527 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : openblas-openmp-0.3.29-1.el9.x86_64 47/102 2026-03-09T20:56:41.530 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libgfortran-11.5.0-14.el9.x86_64 48/102 2026-03-09T20:56:41.533 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 49/102 2026-03-09T20:56:41.542 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/102 2026-03-09T20:56:41.557 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102 2026-03-09T20:56:41.557 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T20:56:41.557 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service". 
2026-03-09T20:56:41.557 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:41.557 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102 2026-03-09T20:56:41.558 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 31/102 2026-03-09T20:56:41.564 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-cryptography-36.0.1-5.el9.x86_64 32/102 2026-03-09T20:56:41.566 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102 2026-03-09T20:56:41.567 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : protobuf-compiler-3.14.0-17.el9.x86_64 33/102 2026-03-09T20:56:41.567 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : openblas-0.3.29-1.el9.x86_64 51/102 2026-03-09T20:56:41.569 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-bcrypt-3.2.2-1.el9.x86_64 34/102 2026-03-09T20:56:41.570 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : flexiblas-3.0.4-9.el9.x86_64 52/102 2026-03-09T20:56:41.573 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-ply-3.11-14.el9.noarch 53/102 2026-03-09T20:56:41.575 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-repoze-lru-0.7-16.el9.noarch 54/102 2026-03-09T20:56:41.578 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jaraco-8.2.1-3.el9.noarch 55/102 2026-03-09T20:56:41.581 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-more-itertools-8.12.0-2.el9.noarch 56/102 2026-03-09T20:56:41.584 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-toml-0.10.2-6.el9.noarch 57/102 2026-03-09T20:56:41.588 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-pytz-2021.1-5.el9.noarch 58/102 2026-03-09T20:56:41.590 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102 2026-03-09T20:56:41.590 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T20:56:41.590 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service". 2026-03-09T20:56:41.590 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target". 2026-03-09T20:56:41.590 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target". 
2026-03-09T20:56:41.590 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:41.591 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102 2026-03-09T20:56:41.596 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-backports-tarfile-1.2.0-1.el9.noarch 59/102 2026-03-09T20:56:41.601 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-devel-3.9.25-3.el9.x86_64 60/102 2026-03-09T20:56:41.603 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102 2026-03-09T20:56:41.603 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jsonpointer-2.0-4.el9.noarch 61/102 2026-03-09T20:56:41.606 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-typing-extensions-4.15.0-1.el9.noarch 62/102 2026-03-09T20:56:41.607 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-mako-1.1.4-6.el9.noarch 36/102 2026-03-09T20:56:41.609 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-idna-2.10-7.el9.1.noarch 63/102 2026-03-09T20:56:41.610 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jaraco-context-6.0.1-3.el9.noarch 37/102 2026-03-09T20:56:41.612 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-portend-3.1.0-2.el9.noarch 38/102 2026-03-09T20:56:41.614 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-pysocks-1.7.1-12.el9.noarch 64/102 2026-03-09T20:56:41.615 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-tempora-5.0.0-2.el9.noarch 39/102 2026-03-09T20:56:41.618 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-pyasn1-0.4.8-7.el9.noarch 65/102 2026-03-09T20:56:41.619 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jaraco-functools-3.5.0-2.el9.noarch 40/102 2026-03-09T20:56:41.623 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-routes-2.5.1-5.el9.noarch 41/102 2026-03-09T20:56:41.623 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-logutils-0.3.5-21.el9.noarch 66/102 2026-03-09T20:56:41.627 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-webob-1.8.8-2.el9.noarch 67/102 2026-03-09T20:56:41.627 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-cffi-1.14.5-5.el9.x86_64 42/102 2026-03-09T20:56:41.633 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-cachetools-4.2.4-1.el9.noarch 68/102 2026-03-09T20:56:41.637 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-chardet-4.0.0-5.el9.noarch 69/102 2026-03-09T20:56:41.640 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-autocommand-2.2.2-8.el9.noarch 70/102 2026-03-09T20:56:41.642 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-packaging-20.9-5.el9.noarch 71/102 2026-03-09T20:56:41.648 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : grpc-data-1.46.7-10.el9.noarch 72/102 2026-03-09T20:56:41.652 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-protobuf-3.14.0-17.el9.noarch 73/102 2026-03-09T20:56:41.655 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-zc-lockfile-2.0-10.el9.noarch 74/102 2026-03-09T20:56:41.664 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-natsort-7.1.1-5.el9.noarch 75/102 2026-03-09T20:56:41.670 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-oauthlib-3.1.1-5.el9.noarch 76/102 2026-03-09T20:56:41.673 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-pycparser-2.20-6.el9.noarch 43/102 2026-03-09T20:56:41.674 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : 
python3-websocket-client-1.2.3-2.el9.noarch 77/102 2026-03-09T20:56:41.676 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-certifi-2023.05.07-4.el9.noarch 78/102 2026-03-09T20:56:41.678 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 79/102 2026-03-09T20:56:41.684 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 80/102 2026-03-09T20:56:41.685 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-numpy-1:1.23.5-2.el9.x86_64 44/102 2026-03-09T20:56:41.688 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : flexiblas-netlib-3.0.4-9.el9.x86_64 45/102 2026-03-09T20:56:41.688 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-werkzeug-2.0.3-3.el9.1.noarch 81/102 2026-03-09T20:56:41.691 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 46/102 2026-03-09T20:56:41.693 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : openblas-openmp-0.3.29-1.el9.x86_64 47/102 2026-03-09T20:56:41.696 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libgfortran-11.5.0-14.el9.x86_64 48/102 2026-03-09T20:56:41.701 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 49/102 2026-03-09T20:56:41.711 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102 2026-03-09T20:56:41.711 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-crash.service". 2026-03-09T20:56:41.711 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:41.717 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102 2026-03-09T20:56:41.723 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102 2026-03-09T20:56:41.723 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T20:56:41.723 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service". 
2026-03-09T20:56:41.723 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:41.723 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102 2026-03-09T20:56:41.732 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102 2026-03-09T20:56:41.734 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : openblas-0.3.29-1.el9.x86_64 51/102 2026-03-09T20:56:41.736 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : flexiblas-3.0.4-9.el9.x86_64 52/102 2026-03-09T20:56:41.739 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-ply-3.11-14.el9.noarch 53/102 2026-03-09T20:56:41.741 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-repoze-lru-0.7-16.el9.noarch 54/102 2026-03-09T20:56:41.742 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102 2026-03-09T20:56:41.742 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 83/102 2026-03-09T20:56:41.743 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jaraco-8.2.1-3.el9.noarch 55/102 2026-03-09T20:56:41.745 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-more-itertools-8.12.0-2.el9.noarch 56/102 2026-03-09T20:56:41.748 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-toml-0.10.2-6.el9.noarch 57/102 2026-03-09T20:56:41.751 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-pytz-2021.1-5.el9.noarch 58/102 2026-03-09T20:56:41.756 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 83/102 2026-03-09T20:56:41.759 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-backports-tarfile-1.2.0-1.el9.noarch 59/102 2026-03-09T20:56:41.761 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : qatzip-libs-1.3.1-1.el9.x86_64 84/102 2026-03-09T20:56:41.764 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 85/102 2026-03-09T20:56:41.764 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-devel-3.9.25-3.el9.x86_64 60/102 2026-03-09T20:56:41.766 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-prettytable-0.7.2-27.el9.noarch 86/102 2026-03-09T20:56:41.766 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 87/102 2026-03-09T20:56:41.766 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jsonpointer-2.0-4.el9.noarch 61/102 2026-03-09T20:56:41.769 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-typing-extensions-4.15.0-1.el9.noarch 62/102 2026-03-09T20:56:41.772 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-idna-2.10-7.el9.1.noarch 63/102 2026-03-09T20:56:41.778 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-pysocks-1.7.1-12.el9.noarch 64/102 2026-03-09T20:56:41.782 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-pyasn1-0.4.8-7.el9.noarch 65/102 2026-03-09T20:56:41.787 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-logutils-0.3.5-21.el9.noarch 66/102 2026-03-09T20:56:41.791 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-webob-1.8.8-2.el9.noarch 67/102 2026-03-09T20:56:41.797 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-cachetools-4.2.4-1.el9.noarch 68/102 2026-03-09T20:56:41.800 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-chardet-4.0.0-5.el9.noarch 69/102 2026-03-09T20:56:41.804 
INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-autocommand-2.2.2-8.el9.noarch 70/102 2026-03-09T20:56:41.806 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-packaging-20.9-5.el9.noarch 71/102 2026-03-09T20:56:41.812 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : grpc-data-1.46.7-10.el9.noarch 72/102 2026-03-09T20:56:41.816 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-protobuf-3.14.0-17.el9.noarch 73/102 2026-03-09T20:56:41.819 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-zc-lockfile-2.0-10.el9.noarch 74/102 2026-03-09T20:56:41.827 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-natsort-7.1.1-5.el9.noarch 75/102 2026-03-09T20:56:41.833 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-oauthlib-3.1.1-5.el9.noarch 76/102 2026-03-09T20:56:41.837 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-websocket-client-1.2.3-2.el9.noarch 77/102 2026-03-09T20:56:41.839 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-certifi-2023.05.07-4.el9.noarch 78/102 2026-03-09T20:56:41.841 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 79/102 2026-03-09T20:56:41.847 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 80/102 2026-03-09T20:56:41.851 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-werkzeug-2.0.3-3.el9.1.noarch 81/102 2026-03-09T20:56:41.871 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102 2026-03-09T20:56:41.871 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-crash.service". 2026-03-09T20:56:41.871 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:41.877 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102 2026-03-09T20:56:41.905 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102 2026-03-09T20:56:41.905 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 83/102 2026-03-09T20:56:41.916 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 83/102 2026-03-09T20:56:41.922 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : qatzip-libs-1.3.1-1.el9.x86_64 84/102 2026-03-09T20:56:41.924 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 85/102 2026-03-09T20:56:41.926 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-prettytable-0.7.2-27.el9.noarch 86/102 2026-03-09T20:56:41.926 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 87/102 2026-03-09T20:56:47.313 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 87/102 2026-03-09T20:56:47.313 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /sys 2026-03-09T20:56:47.313 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /proc 2026-03-09T20:56:47.313 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /mnt 2026-03-09T20:56:47.313 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /var/tmp 2026-03-09T20:56:47.313 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /home 2026-03-09T20:56:47.313 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /root 
2026-03-09T20:56:47.313 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /tmp 2026-03-09T20:56:47.313 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:47.325 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : qatlib-25.08.0-2.el9.x86_64 88/102 2026-03-09T20:56:47.345 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 89/102 2026-03-09T20:56:47.345 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : qatlib-service-25.08.0-2.el9.x86_64 89/102 2026-03-09T20:56:47.353 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 89/102 2026-03-09T20:56:47.356 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : gperftools-libs-2.9.1-3.el9.x86_64 90/102 2026-03-09T20:56:47.359 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libunwind-1.6.2-1.el9.x86_64 91/102 2026-03-09T20:56:47.361 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : pciutils-3.7.0-7.el9.x86_64 92/102 2026-03-09T20:56:47.364 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : liboath-2.6.12-1.el9.x86_64 93/102 2026-03-09T20:56:47.364 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 94/102 2026-03-09T20:56:47.378 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 94/102 2026-03-09T20:56:47.380 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ledmon-libs-1.1.0-3.el9.x86_64 95/102 2026-03-09T20:56:47.383 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libquadmath-11.5.0-14.el9.x86_64 96/102 2026-03-09T20:56:47.385 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-markupsafe-1.1.1-12.el9.x86_64 97/102 2026-03-09T20:56:47.388 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : protobuf-3.14.0-17.el9.x86_64 98/102 2026-03-09T20:56:47.394 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libconfig-1.7.2-9.el9.x86_64 99/102 2026-03-09T20:56:47.401 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : cryptsetup-2.8.1-3.el9.x86_64 100/102 2026-03-09T20:56:47.406 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : abseil-cpp-20211102.0-4.el9.x86_64 101/102 2026-03-09T20:56:47.406 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102 2026-03-09T20:56:47.502 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 87/102 2026-03-09T20:56:47.502 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /sys 2026-03-09T20:56:47.502 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /proc 2026-03-09T20:56:47.502 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /mnt 2026-03-09T20:56:47.502 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /var/tmp 2026-03-09T20:56:47.502 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /home 2026-03-09T20:56:47.502 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /root 2026-03-09T20:56:47.502 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /tmp 2026-03-09T20:56:47.502 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:47.510 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102 2026-03-09T20:56:47.510 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 1/102 2026-03-09T20:56:47.510 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : 
ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-09T20:56:47.510 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/102 2026-03-09T20:56:47.510 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 4/102 2026-03-09T20:56:47.510 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/102 2026-03-09T20:56:47.510 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 6/102 2026-03-09T20:56:47.510 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102 2026-03-09T20:56:47.510 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 8/102 2026-03-09T20:56:47.510 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 9/102 2026-03-09T20:56:47.510 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 10/102 2026-03-09T20:56:47.510 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 11/102 2026-03-09T20:56:47.510 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-09T20:56:47.510 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 13/102 2026-03-09T20:56:47.510 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 14/102 2026-03-09T20:56:47.511 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 15/102 2026-03-09T20:56:47.511 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 16/102 2026-03-09T20:56:47.511 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 17/102 2026-03-09T20:56:47.511 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 18/102 2026-03-09T20:56:47.511 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 19/102 2026-03-09T20:56:47.511 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 20/102 2026-03-09T20:56:47.511 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 21/102 2026-03-09T20:56:47.511 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 22/102 2026-03-09T20:56:47.511 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 23/102 2026-03-09T20:56:47.511 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 24/102 2026-03-09T20:56:47.511 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 25/102 2026-03-09T20:56:47.511 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 26/102 2026-03-09T20:56:47.511 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 27/102 2026-03-09T20:56:47.511 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 28/102 2026-03-09T20:56:47.511 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 29/102 2026-03-09T20:56:47.511 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : 
libunwind-1.6.2-1.el9.x86_64 30/102 2026-03-09T20:56:47.511 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 31/102 2026-03-09T20:56:47.511 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 32/102 2026-03-09T20:56:47.511 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 33/102 2026-03-09T20:56:47.511 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 34/102 2026-03-09T20:56:47.511 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 35/102 2026-03-09T20:56:47.511 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 36/102 2026-03-09T20:56:47.511 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 37/102 2026-03-09T20:56:47.511 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 38/102 2026-03-09T20:56:47.511 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 39/102 2026-03-09T20:56:47.511 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 40/102 2026-03-09T20:56:47.511 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 41/102 2026-03-09T20:56:47.511 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 42/102 2026-03-09T20:56:47.511 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 43/102 2026-03-09T20:56:47.511 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/102 2026-03-09T20:56:47.511 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-chardet-4.0.0-5.el9.noarch 45/102 2026-03-09T20:56:47.511 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 46/102 2026-03-09T20:56:47.511 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 47/102 2026-03-09T20:56:47.511 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 48/102 2026-03-09T20:56:47.511 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 49/102 2026-03-09T20:56:47.511 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 50/102 2026-03-09T20:56:47.511 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 51/102 2026-03-09T20:56:47.512 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 52/102 2026-03-09T20:56:47.512 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-idna-2.10-7.el9.1.noarch 53/102 2026-03-09T20:56:47.512 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 54/102 2026-03-09T20:56:47.512 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 55/102 2026-03-09T20:56:47.512 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 56/102 2026-03-09T20:56:47.512 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 57/102 2026-03-09T20:56:47.512 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 58/102 2026-03-09T20:56:47.512 
INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 59/102 2026-03-09T20:56:47.512 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 60/102 2026-03-09T20:56:47.512 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jsonpatch-1.21-16.el9.noarch 61/102 2026-03-09T20:56:47.512 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jsonpointer-2.0-4.el9.noarch 62/102 2026-03-09T20:56:47.512 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 63/102 2026-03-09T20:56:47.512 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 64/102 2026-03-09T20:56:47.512 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 65/102 2026-03-09T20:56:47.512 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 66/102 2026-03-09T20:56:47.512 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 67/102 2026-03-09T20:56:47.512 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 68/102 2026-03-09T20:56:47.512 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 69/102 2026-03-09T20:56:47.512 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 70/102 2026-03-09T20:56:47.512 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 71/102 2026-03-09T20:56:47.512 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-oauthlib-3.1.1-5.el9.noarch 72/102 2026-03-09T20:56:47.512 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 73/102 2026-03-09T20:56:47.512 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 74/102 2026-03-09T20:56:47.513 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-ply-3.11-14.el9.noarch 75/102 2026-03-09T20:56:47.513 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 76/102 2026-03-09T20:56:47.513 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-prettytable-0.7.2-27.el9.noarch 77/102 2026-03-09T20:56:47.513 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 78/102 2026-03-09T20:56:47.513 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 79/102 2026-03-09T20:56:47.513 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 80/102 2026-03-09T20:56:47.513 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 81/102 2026-03-09T20:56:47.513 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 82/102 2026-03-09T20:56:47.513 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pysocks-1.7.1-12.el9.noarch 83/102 2026-03-09T20:56:47.513 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pytz-2021.1-5.el9.noarch 84/102 2026-03-09T20:56:47.513 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 85/102 2026-03-09T20:56:47.513 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 86/102 2026-03-09T20:56:47.513 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 87/102 
2026-03-09T20:56:47.513 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 88/102 2026-03-09T20:56:47.513 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 89/102 2026-03-09T20:56:47.513 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 90/102 2026-03-09T20:56:47.513 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 91/102 2026-03-09T20:56:47.513 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 92/102 2026-03-09T20:56:47.513 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 93/102 2026-03-09T20:56:47.513 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 94/102 2026-03-09T20:56:47.513 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 95/102 2026-03-09T20:56:47.513 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 96/102 2026-03-09T20:56:47.513 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 97/102 2026-03-09T20:56:47.513 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 98/102 2026-03-09T20:56:47.513 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 99/102 2026-03-09T20:56:47.513 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 100/102 2026-03-09T20:56:47.513 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 101/102 2026-03-09T20:56:47.513 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : qatlib-25.08.0-2.el9.x86_64 88/102 2026-03-09T20:56:47.532 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 89/102 2026-03-09T20:56:47.532 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : qatlib-service-25.08.0-2.el9.x86_64 89/102 2026-03-09T20:56:47.543 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 89/102 2026-03-09T20:56:47.545 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : gperftools-libs-2.9.1-3.el9.x86_64 90/102 2026-03-09T20:56:47.548 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libunwind-1.6.2-1.el9.x86_64 91/102 2026-03-09T20:56:47.551 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : pciutils-3.7.0-7.el9.x86_64 92/102 2026-03-09T20:56:47.553 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : liboath-2.6.12-1.el9.x86_64 93/102 2026-03-09T20:56:47.553 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 94/102 2026-03-09T20:56:47.567 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 94/102 2026-03-09T20:56:47.569 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ledmon-libs-1.1.0-3.el9.x86_64 95/102 2026-03-09T20:56:47.572 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libquadmath-11.5.0-14.el9.x86_64 96/102 2026-03-09T20:56:47.576 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-markupsafe-1.1.1-12.el9.x86_64 97/102 2026-03-09T20:56:47.578 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : protobuf-3.14.0-17.el9.x86_64 98/102 2026-03-09T20:56:47.584 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libconfig-1.7.2-9.el9.x86_64 99/102 2026-03-09T20:56:47.592 
INFO:teuthology.orchestra.run.vm05.stdout: Erasing : cryptsetup-2.8.1-3.el9.x86_64 100/102 2026-03-09T20:56:47.597 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : abseil-cpp-20211102.0-4.el9.x86_64 101/102 2026-03-09T20:56:47.597 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102 2026-03-09T20:56:47.607 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102 2026-03-09T20:56:47.607 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:47.607 INFO:teuthology.orchestra.run.vm09.stdout:Removed: 2026-03-09T20:56:47.607 INFO:teuthology.orchestra.run.vm09.stdout: abseil-cpp-20211102.0-4.el9.x86_64 2026-03-09T20:56:47.607 INFO:teuthology.orchestra.run.vm09.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:47.607 INFO:teuthology.orchestra.run.vm09.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:47.607 INFO:teuthology.orchestra.run.vm09.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T20:56:47.607 INFO:teuthology.orchestra.run.vm09.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:47.607 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:47.607 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T20:56:47.607 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T20:56:47.607 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T20:56:47.607 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T20:56:47.607 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T20:56:47.607 INFO:teuthology.orchestra.run.vm09.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:47.607 INFO:teuthology.orchestra.run.vm09.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T20:56:47.607 INFO:teuthology.orchestra.run.vm09.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:47.607 INFO:teuthology.orchestra.run.vm09.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T20:56:47.607 INFO:teuthology.orchestra.run.vm09.stdout: cryptsetup-2.8.1-3.el9.x86_64 2026-03-09T20:56:47.607 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-09T20:56:47.607 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-09T20:56:47.607 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-09T20:56:47.607 INFO:teuthology.orchestra.run.vm09.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-03-09T20:56:47.607 INFO:teuthology.orchestra.run.vm09.stdout: grpc-data-1.46.7-10.el9.noarch 2026-03-09T20:56:47.607 INFO:teuthology.orchestra.run.vm09.stdout: ledmon-libs-1.1.0-3.el9.x86_64 2026-03-09T20:56:47.607 INFO:teuthology.orchestra.run.vm09.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: libconfig-1.7.2-9.el9.x86_64 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: 
libquadmath-11.5.0-14.el9.x86_64 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: pciutils-3.7.0-7.el9.x86_64 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: protobuf-3.14.0-17.el9.x86_64 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: protobuf-compiler-3.14.0-17.el9.x86_64 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-chardet-4.0.0-5.el9.noarch 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-grpcio-1.46.7-10.el9.x86_64 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-idna-2.10-7.el9.1.noarch 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-09T20:56:47.608 
INFO:teuthology.orchestra.run.vm09.stdout: python3-jsonpatch-1.21-16.el9.noarch 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-jsonpointer-2.0-4.el9.noarch 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-logutils-0.3.5-21.el9.noarch 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-oauthlib-3.1.1-5.el9.noarch 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-packaging-20.9-5.el9.noarch 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-pecan-1.4.2-3.el9.noarch 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-ply-3.11-14.el9.noarch 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-prettytable-0.7.2-27.el9.noarch 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-protobuf-3.14.0-17.el9.noarch 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-pysocks-1.7.1-12.el9.noarch 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-pytz-2021.1-5.el9.noarch 2026-03-09T20:56:47.608 INFO:teuthology.orchestra.run.vm09.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-09T20:56:47.609 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-09T20:56:47.609 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-09T20:56:47.609 INFO:teuthology.orchestra.run.vm09.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-09T20:56:47.609 INFO:teuthology.orchestra.run.vm09.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-09T20:56:47.609 INFO:teuthology.orchestra.run.vm09.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-09T20:56:47.609 INFO:teuthology.orchestra.run.vm09.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-09T20:56:47.609 INFO:teuthology.orchestra.run.vm09.stdout: python3-toml-0.10.2-6.el9.noarch 2026-03-09T20:56:47.609 INFO:teuthology.orchestra.run.vm09.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-09T20:56:47.609 INFO:teuthology.orchestra.run.vm09.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-09T20:56:47.609 
INFO:teuthology.orchestra.run.vm09.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-09T20:56:47.609 INFO:teuthology.orchestra.run.vm09.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-09T20:56:47.609 INFO:teuthology.orchestra.run.vm09.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-09T20:56:47.609 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-09T20:56:47.609 INFO:teuthology.orchestra.run.vm09.stdout: qatlib-25.08.0-2.el9.x86_64 2026-03-09T20:56:47.609 INFO:teuthology.orchestra.run.vm09.stdout: qatlib-service-25.08.0-2.el9.x86_64 2026-03-09T20:56:47.609 INFO:teuthology.orchestra.run.vm09.stdout: qatzip-libs-1.3.1-1.el9.x86_64 2026-03-09T20:56:47.609 INFO:teuthology.orchestra.run.vm09.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:47.609 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:47.609 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-09T20:56:47.709 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102 2026-03-09T20:56:47.709 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 1/102 2026-03-09T20:56:47.709 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-09T20:56:47.709 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/102 2026-03-09T20:56:47.709 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 4/102 2026-03-09T20:56:47.709 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/102 2026-03-09T20:56:47.709 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 6/102 2026-03-09T20:56:47.709 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102 2026-03-09T20:56:47.709 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 8/102 2026-03-09T20:56:47.709 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 9/102 2026-03-09T20:56:47.709 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 10/102 2026-03-09T20:56:47.709 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 11/102 2026-03-09T20:56:47.709 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-09T20:56:47.709 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 13/102 2026-03-09T20:56:47.709 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 14/102 2026-03-09T20:56:47.709 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 15/102 2026-03-09T20:56:47.709 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 16/102 2026-03-09T20:56:47.709 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 17/102 2026-03-09T20:56:47.709 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 18/102 2026-03-09T20:56:47.709 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 19/102 
2026-03-09T20:56:47.709 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 20/102 2026-03-09T20:56:47.709 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 21/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 22/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 23/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 24/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 25/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 26/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 27/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 28/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 29/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 30/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 31/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 32/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 33/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 34/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 35/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 36/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 37/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 38/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 39/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 40/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 41/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 42/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 43/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-chardet-4.0.0-5.el9.noarch 45/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 46/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 47/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 48/102 
2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 49/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 50/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 51/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 52/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-idna-2.10-7.el9.1.noarch 53/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 54/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 55/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 56/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 57/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 58/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 59/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 60/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jsonpatch-1.21-16.el9.noarch 61/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jsonpointer-2.0-4.el9.noarch 62/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 63/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 64/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 65/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 66/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 67/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 68/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 69/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 70/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 71/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-oauthlib-3.1.1-5.el9.noarch 72/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 73/102 2026-03-09T20:56:47.710 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 74/102 2026-03-09T20:56:47.711 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-ply-3.11-14.el9.noarch 75/102 2026-03-09T20:56:47.711 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 76/102 2026-03-09T20:56:47.711 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : 
python3-prettytable-0.7.2-27.el9.noarch 77/102 2026-03-09T20:56:47.711 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 78/102 2026-03-09T20:56:47.711 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 79/102 2026-03-09T20:56:47.711 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 80/102 2026-03-09T20:56:47.711 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 81/102 2026-03-09T20:56:47.711 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 82/102 2026-03-09T20:56:47.711 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pysocks-1.7.1-12.el9.noarch 83/102 2026-03-09T20:56:47.711 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pytz-2021.1-5.el9.noarch 84/102 2026-03-09T20:56:47.711 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 85/102 2026-03-09T20:56:47.711 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 86/102 2026-03-09T20:56:47.711 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 87/102 2026-03-09T20:56:47.711 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 88/102 2026-03-09T20:56:47.711 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 89/102 2026-03-09T20:56:47.711 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 90/102 2026-03-09T20:56:47.711 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 91/102 2026-03-09T20:56:47.711 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 92/102 2026-03-09T20:56:47.711 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 93/102 2026-03-09T20:56:47.711 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 94/102 2026-03-09T20:56:47.711 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 95/102 2026-03-09T20:56:47.711 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 96/102 2026-03-09T20:56:47.711 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 97/102 2026-03-09T20:56:47.711 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 98/102 2026-03-09T20:56:47.711 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 99/102 2026-03-09T20:56:47.711 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 100/102 2026-03-09T20:56:47.711 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 101/102 2026-03-09T20:56:47.792 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102 2026-03-09T20:56:47.792 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:47.792 INFO:teuthology.orchestra.run.vm05.stdout:Removed: 2026-03-09T20:56:47.792 INFO:teuthology.orchestra.run.vm05.stdout: abseil-cpp-20211102.0-4.el9.x86_64 2026-03-09T20:56:47.792 INFO:teuthology.orchestra.run.vm05.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:47.792 INFO:teuthology.orchestra.run.vm05.stdout: 
ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:47.792 INFO:teuthology.orchestra.run.vm05.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T20:56:47.792 INFO:teuthology.orchestra.run.vm05.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:47.792 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:47.792 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T20:56:47.792 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T20:56:47.792 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T20:56:47.792 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T20:56:47.792 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T20:56:47.792 INFO:teuthology.orchestra.run.vm05.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:47.792 INFO:teuthology.orchestra.run.vm05.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T20:56:47.792 INFO:teuthology.orchestra.run.vm05.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:47.792 INFO:teuthology.orchestra.run.vm05.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T20:56:47.792 INFO:teuthology.orchestra.run.vm05.stdout: cryptsetup-2.8.1-3.el9.x86_64 2026-03-09T20:56:47.792 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-09T20:56:47.792 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-09T20:56:47.792 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-09T20:56:47.792 INFO:teuthology.orchestra.run.vm05.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-03-09T20:56:47.792 INFO:teuthology.orchestra.run.vm05.stdout: grpc-data-1.46.7-10.el9.noarch 2026-03-09T20:56:47.792 INFO:teuthology.orchestra.run.vm05.stdout: ledmon-libs-1.1.0-3.el9.x86_64 2026-03-09T20:56:47.792 INFO:teuthology.orchestra.run.vm05.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:47.792 INFO:teuthology.orchestra.run.vm05.stdout: libconfig-1.7.2-9.el9.x86_64 2026-03-09T20:56:47.792 INFO:teuthology.orchestra.run.vm05.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-09T20:56:47.793 INFO:teuthology.orchestra.run.vm05.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-09T20:56:47.793 INFO:teuthology.orchestra.run.vm05.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-09T20:56:47.793 INFO:teuthology.orchestra.run.vm05.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:47.793 INFO:teuthology.orchestra.run.vm05.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-09T20:56:47.793 INFO:teuthology.orchestra.run.vm05.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-09T20:56:47.793 INFO:teuthology.orchestra.run.vm05.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-09T20:56:47.793 INFO:teuthology.orchestra.run.vm05.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-09T20:56:47.793 INFO:teuthology.orchestra.run.vm05.stdout: pciutils-3.7.0-7.el9.x86_64 2026-03-09T20:56:47.793 INFO:teuthology.orchestra.run.vm05.stdout: protobuf-3.14.0-17.el9.x86_64 2026-03-09T20:56:47.793 INFO:teuthology.orchestra.run.vm05.stdout: protobuf-compiler-3.14.0-17.el9.x86_64 2026-03-09T20:56:47.793 
INFO:teuthology.orchestra.run.vm05.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-09T20:56:47.793 INFO:teuthology.orchestra.run.vm05.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-09T20:56:47.793 INFO:teuthology.orchestra.run.vm05.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-09T20:56:47.793 INFO:teuthology.orchestra.run.vm05.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-09T20:56:47.793 INFO:teuthology.orchestra.run.vm05.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-09T20:56:47.793 INFO:teuthology.orchestra.run.vm05.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-09T20:56:47.793 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:47.793 INFO:teuthology.orchestra.run.vm05.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-09T20:56:47.793 INFO:teuthology.orchestra.run.vm05.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-09T20:56:47.793 INFO:teuthology.orchestra.run.vm05.stdout: python3-chardet-4.0.0-5.el9.noarch 2026-03-09T20:56:47.793 INFO:teuthology.orchestra.run.vm05.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-09T20:56:47.793 INFO:teuthology.orchestra.run.vm05.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-09T20:56:47.793 INFO:teuthology.orchestra.run.vm05.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-09T20:56:47.793 INFO:teuthology.orchestra.run.vm05.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-09T20:56:47.793 INFO:teuthology.orchestra.run.vm05.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-09T20:56:47.793 INFO:teuthology.orchestra.run.vm05.stdout: python3-grpcio-1.46.7-10.el9.x86_64 2026-03-09T20:56:47.793 INFO:teuthology.orchestra.run.vm05.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64 2026-03-09T20:56:47.793 INFO:teuthology.orchestra.run.vm05.stdout: python3-idna-2.10-7.el9.1.noarch 2026-03-09T20:56:47.793 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-09T20:56:47.793 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-09T20:56:47.793 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-09T20:56:47.793 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-09T20:56:47.793 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-09T20:56:47.793 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-09T20:56:47.793 INFO:teuthology.orchestra.run.vm05.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-09T20:56:47.793 INFO:teuthology.orchestra.run.vm05.stdout: python3-jsonpatch-1.21-16.el9.noarch 2026-03-09T20:56:47.793 INFO:teuthology.orchestra.run.vm05.stdout: python3-jsonpointer-2.0-4.el9.noarch 2026-03-09T20:56:47.793 INFO:teuthology.orchestra.run.vm05.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-09T20:56:47.793 INFO:teuthology.orchestra.run.vm05.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-09T20:56:47.793 INFO:teuthology.orchestra.run.vm05.stdout: python3-logutils-0.3.5-21.el9.noarch 2026-03-09T20:56:47.793 INFO:teuthology.orchestra.run.vm05.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-09T20:56:47.793 INFO:teuthology.orchestra.run.vm05.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-09T20:56:47.793 INFO:teuthology.orchestra.run.vm05.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-09T20:56:47.793 INFO:teuthology.orchestra.run.vm05.stdout: 
python3-natsort-7.1.1-5.el9.noarch 2026-03-09T20:56:47.793 INFO:teuthology.orchestra.run.vm05.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-09T20:56:47.794 INFO:teuthology.orchestra.run.vm05.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-09T20:56:47.794 INFO:teuthology.orchestra.run.vm05.stdout: python3-oauthlib-3.1.1-5.el9.noarch 2026-03-09T20:56:47.794 INFO:teuthology.orchestra.run.vm05.stdout: python3-packaging-20.9-5.el9.noarch 2026-03-09T20:56:47.794 INFO:teuthology.orchestra.run.vm05.stdout: python3-pecan-1.4.2-3.el9.noarch 2026-03-09T20:56:47.794 INFO:teuthology.orchestra.run.vm05.stdout: python3-ply-3.11-14.el9.noarch 2026-03-09T20:56:47.794 INFO:teuthology.orchestra.run.vm05.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-09T20:56:47.794 INFO:teuthology.orchestra.run.vm05.stdout: python3-prettytable-0.7.2-27.el9.noarch 2026-03-09T20:56:47.794 INFO:teuthology.orchestra.run.vm05.stdout: python3-protobuf-3.14.0-17.el9.noarch 2026-03-09T20:56:47.794 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-09T20:56:47.794 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-09T20:56:47.794 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-09T20:56:47.794 INFO:teuthology.orchestra.run.vm05.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-09T20:56:47.794 INFO:teuthology.orchestra.run.vm05.stdout: python3-pysocks-1.7.1-12.el9.noarch 2026-03-09T20:56:47.794 INFO:teuthology.orchestra.run.vm05.stdout: python3-pytz-2021.1-5.el9.noarch 2026-03-09T20:56:47.794 INFO:teuthology.orchestra.run.vm05.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-09T20:56:47.794 INFO:teuthology.orchestra.run.vm05.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-09T20:56:47.794 INFO:teuthology.orchestra.run.vm05.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-09T20:56:47.794 INFO:teuthology.orchestra.run.vm05.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-09T20:56:47.794 INFO:teuthology.orchestra.run.vm05.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-09T20:56:47.794 INFO:teuthology.orchestra.run.vm05.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-09T20:56:47.794 INFO:teuthology.orchestra.run.vm05.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-09T20:56:47.794 INFO:teuthology.orchestra.run.vm05.stdout: python3-toml-0.10.2-6.el9.noarch 2026-03-09T20:56:47.794 INFO:teuthology.orchestra.run.vm05.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-09T20:56:47.794 INFO:teuthology.orchestra.run.vm05.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-09T20:56:47.794 INFO:teuthology.orchestra.run.vm05.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-09T20:56:47.794 INFO:teuthology.orchestra.run.vm05.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-09T20:56:47.794 INFO:teuthology.orchestra.run.vm05.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-09T20:56:47.794 INFO:teuthology.orchestra.run.vm05.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-09T20:56:47.794 INFO:teuthology.orchestra.run.vm05.stdout: qatlib-25.08.0-2.el9.x86_64 2026-03-09T20:56:47.794 INFO:teuthology.orchestra.run.vm05.stdout: qatlib-service-25.08.0-2.el9.x86_64 2026-03-09T20:56:47.794 INFO:teuthology.orchestra.run.vm05.stdout: qatzip-libs-1.3.1-1.el9.x86_64 2026-03-09T20:56:47.794 INFO:teuthology.orchestra.run.vm05.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:47.794 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:47.794 
INFO:teuthology.orchestra.run.vm05.stdout:Complete! 2026-03-09T20:56:47.818 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-09T20:56:47.819 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T20:56:47.819 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size 2026-03-09T20:56:47.819 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T20:56:47.819 INFO:teuthology.orchestra.run.vm09.stdout:Removing: 2026-03-09T20:56:47.819 INFO:teuthology.orchestra.run.vm09.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 775 k 2026-03-09T20:56:47.819 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:47.819 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary 2026-03-09T20:56:47.819 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T20:56:47.819 INFO:teuthology.orchestra.run.vm09.stdout:Remove 1 Package 2026-03-09T20:56:47.819 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:47.819 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 775 k 2026-03-09T20:56:47.819 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check 2026-03-09T20:56:47.821 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded. 2026-03-09T20:56:47.821 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test 2026-03-09T20:56:47.822 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded. 2026-03-09T20:56:47.822 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction 2026-03-09T20:56:47.839 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1 2026-03-09T20:56:47.839 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1 2026-03-09T20:56:47.946 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1 2026-03-09T20:56:47.987 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1 2026-03-09T20:56:47.987 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:47.987 INFO:teuthology.orchestra.run.vm09.stdout:Removed: 2026-03-09T20:56:47.987 INFO:teuthology.orchestra.run.vm09.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T20:56:47.987 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:47.987 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-09T20:56:48.009 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 
2026-03-09T20:56:48.009 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-09T20:56:48.009 INFO:teuthology.orchestra.run.vm05.stdout: Package Arch Version Repository Size 2026-03-09T20:56:48.009 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-09T20:56:48.009 INFO:teuthology.orchestra.run.vm05.stdout:Removing: 2026-03-09T20:56:48.009 INFO:teuthology.orchestra.run.vm05.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 775 k 2026-03-09T20:56:48.009 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:48.009 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary 2026-03-09T20:56:48.009 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-09T20:56:48.009 INFO:teuthology.orchestra.run.vm05.stdout:Remove 1 Package 2026-03-09T20:56:48.009 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:48.009 INFO:teuthology.orchestra.run.vm05.stdout:Freed space: 775 k 2026-03-09T20:56:48.009 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check 2026-03-09T20:56:48.011 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded. 2026-03-09T20:56:48.011 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test 2026-03-09T20:56:48.012 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded. 2026-03-09T20:56:48.013 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction 2026-03-09T20:56:48.029 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1 2026-03-09T20:56:48.029 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1 2026-03-09T20:56:48.157 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1 2026-03-09T20:56:48.175 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: ceph-immutable-object-cache 2026-03-09T20:56:48.175 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal. 2026-03-09T20:56:48.178 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-09T20:56:48.179 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do. 2026-03-09T20:56:48.179 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-09T20:56:48.202 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1 2026-03-09T20:56:48.202 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:48.202 INFO:teuthology.orchestra.run.vm05.stdout:Removed: 2026-03-09T20:56:48.202 INFO:teuthology.orchestra.run.vm05.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T20:56:48.202 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:48.202 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 2026-03-09T20:56:48.347 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: ceph-mgr 2026-03-09T20:56:48.347 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal. 2026-03-09T20:56:48.351 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-09T20:56:48.351 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do. 2026-03-09T20:56:48.351 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 
2026-03-09T20:56:48.381 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: ceph-immutable-object-cache 2026-03-09T20:56:48.381 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal. 2026-03-09T20:56:48.384 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 2026-03-09T20:56:48.385 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do. 2026-03-09T20:56:48.385 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 2026-03-09T20:56:48.512 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: ceph-mgr-dashboard 2026-03-09T20:56:48.513 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal. 2026-03-09T20:56:48.516 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-09T20:56:48.516 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do. 2026-03-09T20:56:48.516 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-09T20:56:48.559 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: ceph-mgr 2026-03-09T20:56:48.560 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal. 2026-03-09T20:56:48.564 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 2026-03-09T20:56:48.564 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do. 2026-03-09T20:56:48.564 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 2026-03-09T20:56:48.682 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: ceph-mgr-diskprediction-local 2026-03-09T20:56:48.682 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal. 2026-03-09T20:56:48.686 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-09T20:56:48.686 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do. 2026-03-09T20:56:48.686 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-09T20:56:48.730 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: ceph-mgr-dashboard 2026-03-09T20:56:48.730 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal. 2026-03-09T20:56:48.733 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 2026-03-09T20:56:48.734 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do. 2026-03-09T20:56:48.734 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 2026-03-09T20:56:48.854 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: ceph-mgr-rook 2026-03-09T20:56:48.854 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal. 2026-03-09T20:56:48.857 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-09T20:56:48.858 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do. 2026-03-09T20:56:48.858 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-09T20:56:48.908 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: ceph-mgr-diskprediction-local 2026-03-09T20:56:48.908 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal. 2026-03-09T20:56:48.911 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 2026-03-09T20:56:48.912 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do. 2026-03-09T20:56:48.912 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 2026-03-09T20:56:49.022 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: ceph-mgr-cephadm 2026-03-09T20:56:49.022 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal. 2026-03-09T20:56:49.025 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-09T20:56:49.026 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do. 
2026-03-09T20:56:49.026 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-09T20:56:49.075 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: ceph-mgr-rook 2026-03-09T20:56:49.075 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal. 2026-03-09T20:56:49.078 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 2026-03-09T20:56:49.079 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do. 2026-03-09T20:56:49.079 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 2026-03-09T20:56:49.199 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-09T20:56:49.199 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T20:56:49.199 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size 2026-03-09T20:56:49.199 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T20:56:49.199 INFO:teuthology.orchestra.run.vm09.stdout:Removing: 2026-03-09T20:56:49.199 INFO:teuthology.orchestra.run.vm09.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.6 M 2026-03-09T20:56:49.199 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:49.200 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary 2026-03-09T20:56:49.200 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T20:56:49.200 INFO:teuthology.orchestra.run.vm09.stdout:Remove 1 Package 2026-03-09T20:56:49.200 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:49.200 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 3.6 M 2026-03-09T20:56:49.200 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check 2026-03-09T20:56:49.201 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded. 2026-03-09T20:56:49.201 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test 2026-03-09T20:56:49.210 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded. 2026-03-09T20:56:49.210 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction 2026-03-09T20:56:49.238 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: ceph-mgr-cephadm 2026-03-09T20:56:49.239 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal. 2026-03-09T20:56:49.242 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 2026-03-09T20:56:49.242 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do. 2026-03-09T20:56:49.242 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 2026-03-09T20:56:49.262 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1 2026-03-09T20:56:49.276 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1 2026-03-09T20:56:49.343 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1 2026-03-09T20:56:49.394 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1 2026-03-09T20:56:49.394 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:49.394 INFO:teuthology.orchestra.run.vm09.stdout:Removed: 2026-03-09T20:56:49.394 INFO:teuthology.orchestra.run.vm09.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:49.395 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:49.395 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 
2026-03-09T20:56:49.417 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 2026-03-09T20:56:49.417 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-09T20:56:49.417 INFO:teuthology.orchestra.run.vm05.stdout: Package Arch Version Repository Size 2026-03-09T20:56:49.418 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-09T20:56:49.418 INFO:teuthology.orchestra.run.vm05.stdout:Removing: 2026-03-09T20:56:49.418 INFO:teuthology.orchestra.run.vm05.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.6 M 2026-03-09T20:56:49.418 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:49.418 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary 2026-03-09T20:56:49.418 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-09T20:56:49.418 INFO:teuthology.orchestra.run.vm05.stdout:Remove 1 Package 2026-03-09T20:56:49.418 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:49.418 INFO:teuthology.orchestra.run.vm05.stdout:Freed space: 3.6 M 2026-03-09T20:56:49.418 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check 2026-03-09T20:56:49.419 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded. 2026-03-09T20:56:49.420 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test 2026-03-09T20:56:49.429 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded. 2026-03-09T20:56:49.429 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction 2026-03-09T20:56:49.453 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1 2026-03-09T20:56:49.468 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1 2026-03-09T20:56:49.531 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1 2026-03-09T20:56:49.571 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1 2026-03-09T20:56:49.571 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:49.571 INFO:teuthology.orchestra.run.vm05.stdout:Removed: 2026-03-09T20:56:49.571 INFO:teuthology.orchestra.run.vm05.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:49.571 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:49.572 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 2026-03-09T20:56:49.583 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: ceph-volume 2026-03-09T20:56:49.583 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal. 2026-03-09T20:56:49.586 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-09T20:56:49.587 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do. 2026-03-09T20:56:49.587 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-09T20:56:49.748 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: ceph-volume 2026-03-09T20:56:49.748 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal. 2026-03-09T20:56:49.751 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 2026-03-09T20:56:49.752 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do. 2026-03-09T20:56:49.752 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 2026-03-09T20:56:49.769 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 
2026-03-09T20:56:49.769 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T20:56:49.769 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repo Size 2026-03-09T20:56:49.769 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T20:56:49.769 INFO:teuthology.orchestra.run.vm09.stdout:Removing: 2026-03-09T20:56:49.769 INFO:teuthology.orchestra.run.vm09.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 456 k 2026-03-09T20:56:49.769 INFO:teuthology.orchestra.run.vm09.stdout:Removing dependent packages: 2026-03-09T20:56:49.769 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 153 k 2026-03-09T20:56:49.769 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:49.769 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary 2026-03-09T20:56:49.769 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T20:56:49.769 INFO:teuthology.orchestra.run.vm09.stdout:Remove 2 Packages 2026-03-09T20:56:49.769 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:49.770 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 610 k 2026-03-09T20:56:49.770 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check 2026-03-09T20:56:49.771 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded. 2026-03-09T20:56:49.771 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test 2026-03-09T20:56:49.782 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded. 2026-03-09T20:56:49.782 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction 2026-03-09T20:56:49.806 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1 2026-03-09T20:56:49.808 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-09T20:56:49.822 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2 2026-03-09T20:56:49.884 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2 2026-03-09T20:56:49.884 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-09T20:56:49.924 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2 2026-03-09T20:56:49.924 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:49.924 INFO:teuthology.orchestra.run.vm09.stdout:Removed: 2026-03-09T20:56:49.924 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:49.924 INFO:teuthology.orchestra.run.vm09.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:49.924 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:49.924 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-09T20:56:49.937 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 
2026-03-09T20:56:49.937 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-09T20:56:49.937 INFO:teuthology.orchestra.run.vm05.stdout: Package Arch Version Repo Size 2026-03-09T20:56:49.937 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-09T20:56:49.937 INFO:teuthology.orchestra.run.vm05.stdout:Removing: 2026-03-09T20:56:49.937 INFO:teuthology.orchestra.run.vm05.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 456 k 2026-03-09T20:56:49.937 INFO:teuthology.orchestra.run.vm05.stdout:Removing dependent packages: 2026-03-09T20:56:49.937 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 153 k 2026-03-09T20:56:49.937 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:49.937 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary 2026-03-09T20:56:49.937 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-09T20:56:49.938 INFO:teuthology.orchestra.run.vm05.stdout:Remove 2 Packages 2026-03-09T20:56:49.938 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:49.938 INFO:teuthology.orchestra.run.vm05.stdout:Freed space: 610 k 2026-03-09T20:56:49.938 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check 2026-03-09T20:56:49.939 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded. 2026-03-09T20:56:49.940 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test 2026-03-09T20:56:49.950 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded. 2026-03-09T20:56:49.950 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction 2026-03-09T20:56:49.978 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1 2026-03-09T20:56:49.980 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-09T20:56:49.994 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2 2026-03-09T20:56:50.048 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2 2026-03-09T20:56:50.048 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-09T20:56:50.101 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2 2026-03-09T20:56:50.101 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:50.101 INFO:teuthology.orchestra.run.vm05.stdout:Removed: 2026-03-09T20:56:50.101 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:50.101 INFO:teuthology.orchestra.run.vm05.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:50.101 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:50.101 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 2026-03-09T20:56:50.117 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 
2026-03-09T20:56:50.118 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T20:56:50.118 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repo Size 2026-03-09T20:56:50.118 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T20:56:50.118 INFO:teuthology.orchestra.run.vm09.stdout:Removing: 2026-03-09T20:56:50.118 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.0 M 2026-03-09T20:56:50.118 INFO:teuthology.orchestra.run.vm09.stdout:Removing dependent packages: 2026-03-09T20:56:50.118 INFO:teuthology.orchestra.run.vm09.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 514 k 2026-03-09T20:56:50.118 INFO:teuthology.orchestra.run.vm09.stdout:Removing unused dependencies: 2026-03-09T20:56:50.118 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 187 k 2026-03-09T20:56:50.118 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:50.118 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary 2026-03-09T20:56:50.118 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T20:56:50.118 INFO:teuthology.orchestra.run.vm09.stdout:Remove 3 Packages 2026-03-09T20:56:50.118 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:50.118 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 3.7 M 2026-03-09T20:56:50.118 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check 2026-03-09T20:56:50.120 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded. 2026-03-09T20:56:50.120 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test 2026-03-09T20:56:50.136 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded. 
2026-03-09T20:56:50.137 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction 2026-03-09T20:56:50.170 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1 2026-03-09T20:56:50.172 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3 2026-03-09T20:56:50.173 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3 2026-03-09T20:56:50.173 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3 2026-03-09T20:56:50.236 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3 2026-03-09T20:56:50.236 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3 2026-03-09T20:56:50.236 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3 2026-03-09T20:56:50.274 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3 2026-03-09T20:56:50.274 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:50.274 INFO:teuthology.orchestra.run.vm09.stdout:Removed: 2026-03-09T20:56:50.275 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:50.275 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:50.275 INFO:teuthology.orchestra.run.vm09.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:50.275 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:50.275 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-09T20:56:50.303 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 
2026-03-09T20:56:50.304 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-09T20:56:50.304 INFO:teuthology.orchestra.run.vm05.stdout: Package Arch Version Repo Size 2026-03-09T20:56:50.304 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-09T20:56:50.304 INFO:teuthology.orchestra.run.vm05.stdout:Removing: 2026-03-09T20:56:50.304 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.0 M 2026-03-09T20:56:50.304 INFO:teuthology.orchestra.run.vm05.stdout:Removing dependent packages: 2026-03-09T20:56:50.304 INFO:teuthology.orchestra.run.vm05.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 514 k 2026-03-09T20:56:50.304 INFO:teuthology.orchestra.run.vm05.stdout:Removing unused dependencies: 2026-03-09T20:56:50.304 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 187 k 2026-03-09T20:56:50.304 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:50.304 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary 2026-03-09T20:56:50.304 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-09T20:56:50.304 INFO:teuthology.orchestra.run.vm05.stdout:Remove 3 Packages 2026-03-09T20:56:50.304 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:50.304 INFO:teuthology.orchestra.run.vm05.stdout:Freed space: 3.7 M 2026-03-09T20:56:50.304 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check 2026-03-09T20:56:50.306 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded. 2026-03-09T20:56:50.306 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test 2026-03-09T20:56:50.322 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded. 2026-03-09T20:56:50.322 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction 2026-03-09T20:56:50.353 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1 2026-03-09T20:56:50.355 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3 2026-03-09T20:56:50.357 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3 2026-03-09T20:56:50.357 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3 2026-03-09T20:56:50.425 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3 2026-03-09T20:56:50.425 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3 2026-03-09T20:56:50.425 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3 2026-03-09T20:56:50.446 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: libcephfs-devel 2026-03-09T20:56:50.446 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal. 2026-03-09T20:56:50.449 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-09T20:56:50.450 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do. 2026-03-09T20:56:50.450 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 
2026-03-09T20:56:50.471 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3 2026-03-09T20:56:50.472 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:50.472 INFO:teuthology.orchestra.run.vm05.stdout:Removed: 2026-03-09T20:56:50.472 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:50.472 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:50.472 INFO:teuthology.orchestra.run.vm05.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:50.472 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:50.472 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 2026-03-09T20:56:50.631 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-09T20:56:50.632 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T20:56:50.632 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size 2026-03-09T20:56:50.632 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T20:56:50.632 INFO:teuthology.orchestra.run.vm09.stdout:Removing: 2026-03-09T20:56:50.632 INFO:teuthology.orchestra.run.vm09.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 12 M 2026-03-09T20:56:50.632 INFO:teuthology.orchestra.run.vm09.stdout:Removing dependent packages: 2026-03-09T20:56:50.632 INFO:teuthology.orchestra.run.vm09.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M 2026-03-09T20:56:50.632 INFO:teuthology.orchestra.run.vm09.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M 2026-03-09T20:56:50.632 INFO:teuthology.orchestra.run.vm09.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 265 k 2026-03-09T20:56:50.632 INFO:teuthology.orchestra.run.vm09.stdout: qemu-kvm-block-rbd x86_64 17:10.1.0-15.el9 @appstream 37 k 2026-03-09T20:56:50.632 INFO:teuthology.orchestra.run.vm09.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 227 k 2026-03-09T20:56:50.632 INFO:teuthology.orchestra.run.vm09.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 490 k 2026-03-09T20:56:50.632 INFO:teuthology.orchestra.run.vm09.stdout:Removing unused dependencies: 2026-03-09T20:56:50.632 INFO:teuthology.orchestra.run.vm09.stdout: boost-program-options x86_64 1.75.0-13.el9 @appstream 276 k 2026-03-09T20:56:50.632 INFO:teuthology.orchestra.run.vm09.stdout: libarrow x86_64 9.0.0-15.el9 @epel 18 M 2026-03-09T20:56:50.632 INFO:teuthology.orchestra.run.vm09.stdout: libarrow-doc noarch 9.0.0-15.el9 @epel 122 k 2026-03-09T20:56:50.632 INFO:teuthology.orchestra.run.vm09.stdout: libnbd x86_64 1.20.3-4.el9 @appstream 453 k 2026-03-09T20:56:50.632 INFO:teuthology.orchestra.run.vm09.stdout: libpmemobj x86_64 1.12.1-1.el9 @appstream 383 k 2026-03-09T20:56:50.632 INFO:teuthology.orchestra.run.vm09.stdout: librabbitmq x86_64 0.11.0-7.el9 @appstream 102 k 2026-03-09T20:56:50.632 INFO:teuthology.orchestra.run.vm09.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M 2026-03-09T20:56:50.632 INFO:teuthology.orchestra.run.vm09.stdout: librdkafka x86_64 1.6.1-102.el9 @appstream 2.0 M 2026-03-09T20:56:50.632 INFO:teuthology.orchestra.run.vm09.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 19 M 2026-03-09T20:56:50.632 INFO:teuthology.orchestra.run.vm09.stdout: lttng-ust x86_64 2.12.0-6.el9 @appstream 1.0 M 
2026-03-09T20:56:50.632 INFO:teuthology.orchestra.run.vm09.stdout: parquet-libs x86_64 9.0.0-15.el9 @epel 2.8 M 2026-03-09T20:56:50.632 INFO:teuthology.orchestra.run.vm09.stdout: re2 x86_64 1:20211101-20.el9 @epel 472 k 2026-03-09T20:56:50.632 INFO:teuthology.orchestra.run.vm09.stdout: thrift x86_64 0.15.0-4.el9 @epel 4.8 M 2026-03-09T20:56:50.632 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:50.632 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary 2026-03-09T20:56:50.632 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T20:56:50.632 INFO:teuthology.orchestra.run.vm09.stdout:Remove 20 Packages 2026-03-09T20:56:50.632 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:50.633 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 79 M 2026-03-09T20:56:50.633 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check 2026-03-09T20:56:50.636 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded. 2026-03-09T20:56:50.636 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test 2026-03-09T20:56:50.650 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: libcephfs-devel 2026-03-09T20:56:50.651 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal. 2026-03-09T20:56:50.654 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 2026-03-09T20:56:50.655 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do. 2026-03-09T20:56:50.655 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 2026-03-09T20:56:50.660 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded. 2026-03-09T20:56:50.660 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction 2026-03-09T20:56:50.706 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1 2026-03-09T20:56:50.724 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 1/20 2026-03-09T20:56:50.727 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2/20 2026-03-09T20:56:50.729 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 3/20 2026-03-09T20:56:50.729 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20 2026-03-09T20:56:50.743 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20 2026-03-09T20:56:50.746 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : parquet-libs-9.0.0-15.el9.x86_64 5/20 2026-03-09T20:56:50.748 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 6/20 2026-03-09T20:56:50.750 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20 2026-03-09T20:56:50.751 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 8/20 2026-03-09T20:56:50.754 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libarrow-doc-9.0.0-15.el9.noarch 9/20 2026-03-09T20:56:50.754 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20 2026-03-09T20:56:50.769 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20 2026-03-09T20:56:50.769 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20 2026-03-09T20:56:50.769 
INFO:teuthology.orchestra.run.vm09.stdout:warning: file /etc/ceph: remove failed: No such file or directory 2026-03-09T20:56:50.769 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:50.783 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20 2026-03-09T20:56:50.785 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libarrow-9.0.0-15.el9.x86_64 12/20 2026-03-09T20:56:50.789 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : re2-1:20211101-20.el9.x86_64 13/20 2026-03-09T20:56:50.792 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : lttng-ust-2.12.0-6.el9.x86_64 14/20 2026-03-09T20:56:50.796 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : thrift-0.15.0-4.el9.x86_64 15/20 2026-03-09T20:56:50.798 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libnbd-1.20.3-4.el9.x86_64 16/20 2026-03-09T20:56:50.801 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libpmemobj-1.12.1-1.el9.x86_64 17/20 2026-03-09T20:56:50.803 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : boost-program-options-1.75.0-13.el9.x86_64 18/20 2026-03-09T20:56:50.805 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : librabbitmq-0.11.0-7.el9.x86_64 19/20 2026-03-09T20:56:50.819 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : librdkafka-1.6.1-102.el9.x86_64 20/20 2026-03-09T20:56:50.845 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 2026-03-09T20:56:50.847 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-09T20:56:50.847 INFO:teuthology.orchestra.run.vm05.stdout: Package Arch Version Repository Size 2026-03-09T20:56:50.847 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-09T20:56:50.847 INFO:teuthology.orchestra.run.vm05.stdout:Removing: 2026-03-09T20:56:50.847 INFO:teuthology.orchestra.run.vm05.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 12 M 2026-03-09T20:56:50.847 INFO:teuthology.orchestra.run.vm05.stdout:Removing dependent packages: 2026-03-09T20:56:50.847 INFO:teuthology.orchestra.run.vm05.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M 2026-03-09T20:56:50.847 INFO:teuthology.orchestra.run.vm05.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M 2026-03-09T20:56:50.847 INFO:teuthology.orchestra.run.vm05.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 265 k 2026-03-09T20:56:50.847 INFO:teuthology.orchestra.run.vm05.stdout: qemu-kvm-block-rbd x86_64 17:10.1.0-15.el9 @appstream 37 k 2026-03-09T20:56:50.847 INFO:teuthology.orchestra.run.vm05.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 227 k 2026-03-09T20:56:50.847 INFO:teuthology.orchestra.run.vm05.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 490 k 2026-03-09T20:56:50.847 INFO:teuthology.orchestra.run.vm05.stdout:Removing unused dependencies: 2026-03-09T20:56:50.847 INFO:teuthology.orchestra.run.vm05.stdout: boost-program-options x86_64 1.75.0-13.el9 @appstream 276 k 2026-03-09T20:56:50.847 INFO:teuthology.orchestra.run.vm05.stdout: libarrow x86_64 9.0.0-15.el9 @epel 18 M 2026-03-09T20:56:50.847 INFO:teuthology.orchestra.run.vm05.stdout: libarrow-doc noarch 9.0.0-15.el9 @epel 122 k 2026-03-09T20:56:50.847 INFO:teuthology.orchestra.run.vm05.stdout: libnbd x86_64 1.20.3-4.el9 @appstream 453 k 2026-03-09T20:56:50.847 INFO:teuthology.orchestra.run.vm05.stdout: libpmemobj x86_64 1.12.1-1.el9 @appstream 383 k 
2026-03-09T20:56:50.847 INFO:teuthology.orchestra.run.vm05.stdout: librabbitmq x86_64 0.11.0-7.el9 @appstream 102 k 2026-03-09T20:56:50.847 INFO:teuthology.orchestra.run.vm05.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M 2026-03-09T20:56:50.847 INFO:teuthology.orchestra.run.vm05.stdout: librdkafka x86_64 1.6.1-102.el9 @appstream 2.0 M 2026-03-09T20:56:50.847 INFO:teuthology.orchestra.run.vm05.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 19 M 2026-03-09T20:56:50.847 INFO:teuthology.orchestra.run.vm05.stdout: lttng-ust x86_64 2.12.0-6.el9 @appstream 1.0 M 2026-03-09T20:56:50.847 INFO:teuthology.orchestra.run.vm05.stdout: parquet-libs x86_64 9.0.0-15.el9 @epel 2.8 M 2026-03-09T20:56:50.847 INFO:teuthology.orchestra.run.vm05.stdout: re2 x86_64 1:20211101-20.el9 @epel 472 k 2026-03-09T20:56:50.847 INFO:teuthology.orchestra.run.vm05.stdout: thrift x86_64 0.15.0-4.el9 @epel 4.8 M 2026-03-09T20:56:50.847 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:50.847 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary 2026-03-09T20:56:50.847 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-09T20:56:50.847 INFO:teuthology.orchestra.run.vm05.stdout:Remove 20 Packages 2026-03-09T20:56:50.847 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:50.847 INFO:teuthology.orchestra.run.vm05.stdout:Freed space: 79 M 2026-03-09T20:56:50.847 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check 2026-03-09T20:56:50.851 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded. 2026-03-09T20:56:50.851 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test 2026-03-09T20:56:50.875 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded. 
2026-03-09T20:56:50.875 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction 2026-03-09T20:56:50.885 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librdkafka-1.6.1-102.el9.x86_64 20/20 2026-03-09T20:56:50.885 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 1/20 2026-03-09T20:56:50.885 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 2/20 2026-03-09T20:56:50.885 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 3/20 2026-03-09T20:56:50.885 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 4/20 2026-03-09T20:56:50.885 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 5/20 2026-03-09T20:56:50.885 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 6/20 2026-03-09T20:56:50.885 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20 2026-03-09T20:56:50.885 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 8/20 2026-03-09T20:56:50.885 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 9/20 2026-03-09T20:56:50.885 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20 2026-03-09T20:56:50.885 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 11/20 2026-03-09T20:56:50.885 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 12/20 2026-03-09T20:56:50.885 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 13/20 2026-03-09T20:56:50.885 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 14/20 2026-03-09T20:56:50.885 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 15/20 2026-03-09T20:56:50.885 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 16/20 2026-03-09T20:56:50.885 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 17/20 2026-03-09T20:56:50.885 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 18/20 2026-03-09T20:56:50.885 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : re2-1:20211101-20.el9.x86_64 19/20 2026-03-09T20:56:50.919 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1 2026-03-09T20:56:50.922 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 1/20 2026-03-09T20:56:50.924 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2/20 2026-03-09T20:56:50.927 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 3/20 2026-03-09T20:56:50.927 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20 2026-03-09T20:56:50.929 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 20/20 2026-03-09T20:56:50.929 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:50.929 INFO:teuthology.orchestra.run.vm09.stdout:Removed: 2026-03-09T20:56:50.929 INFO:teuthology.orchestra.run.vm09.stdout: boost-program-options-1.75.0-13.el9.x86_64 2026-03-09T20:56:50.929 INFO:teuthology.orchestra.run.vm09.stdout: 
libarrow-9.0.0-15.el9.x86_64 2026-03-09T20:56:50.929 INFO:teuthology.orchestra.run.vm09.stdout: libarrow-doc-9.0.0-15.el9.noarch 2026-03-09T20:56:50.929 INFO:teuthology.orchestra.run.vm09.stdout: libnbd-1.20.3-4.el9.x86_64 2026-03-09T20:56:50.929 INFO:teuthology.orchestra.run.vm09.stdout: libpmemobj-1.12.1-1.el9.x86_64 2026-03-09T20:56:50.929 INFO:teuthology.orchestra.run.vm09.stdout: librabbitmq-0.11.0-7.el9.x86_64 2026-03-09T20:56:50.929 INFO:teuthology.orchestra.run.vm09.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:50.929 INFO:teuthology.orchestra.run.vm09.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:50.929 INFO:teuthology.orchestra.run.vm09.stdout: librdkafka-1.6.1-102.el9.x86_64 2026-03-09T20:56:50.929 INFO:teuthology.orchestra.run.vm09.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:50.929 INFO:teuthology.orchestra.run.vm09.stdout: lttng-ust-2.12.0-6.el9.x86_64 2026-03-09T20:56:50.929 INFO:teuthology.orchestra.run.vm09.stdout: parquet-libs-9.0.0-15.el9.x86_64 2026-03-09T20:56:50.929 INFO:teuthology.orchestra.run.vm09.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:50.929 INFO:teuthology.orchestra.run.vm09.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:50.929 INFO:teuthology.orchestra.run.vm09.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:50.929 INFO:teuthology.orchestra.run.vm09.stdout: qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 2026-03-09T20:56:50.929 INFO:teuthology.orchestra.run.vm09.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:50.929 INFO:teuthology.orchestra.run.vm09.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:50.929 INFO:teuthology.orchestra.run.vm09.stdout: re2-1:20211101-20.el9.x86_64 2026-03-09T20:56:50.929 INFO:teuthology.orchestra.run.vm09.stdout: thrift-0.15.0-4.el9.x86_64 2026-03-09T20:56:50.929 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T20:56:50.929 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 
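
Each of the removal transactions above follows the same dnf shape on both hosts: "Dependencies resolved.", a table of direct targets, dependent packages and unused dependencies, a transaction check/test/run, then a "Removed:" list and a "Freed space:" figure before "Complete!". A minimal, purely illustrative sketch (not part of teuthology) that pulls the removed package set and the freed space out of such captured output:

import re

def summarize_dnf_removal(output: str):
    """Parse captured `dnf remove` output: return (removed NEVRAs, freed-space string)."""
    removed, freed, in_removed = [], None, False
    for raw in output.splitlines():
        line = raw.strip()
        if line == "Removed:":
            in_removed = True
            continue
        if in_removed:
            if not line or line == "Complete!":
                in_removed = False
            else:
                removed.extend(line.split())
            continue
        m = re.match(r"Freed space:\s*(.+)", line)
        if m:
            freed = m.group(1)
    return removed, freed

# Against a fragment shaped like the vm09 ceph-fuse transaction above:
sample = "\n".join([
    "Freed space: 3.6 M",
    "Removed:",
    "  ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64",
    "",
    "Complete!",
])
print(summarize_dnf_removal(sample))
# -> (['ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64'], '3.6 M')
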
2026-03-09T20:56:50.940 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20 2026-03-09T20:56:50.943 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : parquet-libs-9.0.0-15.el9.x86_64 5/20 2026-03-09T20:56:50.945 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 6/20 2026-03-09T20:56:50.946 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20 2026-03-09T20:56:50.948 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 8/20 2026-03-09T20:56:50.951 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libarrow-doc-9.0.0-15.el9.noarch 9/20 2026-03-09T20:56:50.951 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20 2026-03-09T20:56:50.966 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20 2026-03-09T20:56:50.966 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20 2026-03-09T20:56:50.966 INFO:teuthology.orchestra.run.vm05.stdout:warning: file /etc/ceph: remove failed: No such file or directory 2026-03-09T20:56:50.966 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:50.981 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20 2026-03-09T20:56:50.984 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libarrow-9.0.0-15.el9.x86_64 12/20 2026-03-09T20:56:50.988 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : re2-1:20211101-20.el9.x86_64 13/20 2026-03-09T20:56:50.992 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : lttng-ust-2.12.0-6.el9.x86_64 14/20 2026-03-09T20:56:50.995 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : thrift-0.15.0-4.el9.x86_64 15/20 2026-03-09T20:56:50.999 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libnbd-1.20.3-4.el9.x86_64 16/20 2026-03-09T20:56:51.002 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libpmemobj-1.12.1-1.el9.x86_64 17/20 2026-03-09T20:56:51.004 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : boost-program-options-1.75.0-13.el9.x86_64 18/20 2026-03-09T20:56:51.007 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : librabbitmq-0.11.0-7.el9.x86_64 19/20 2026-03-09T20:56:51.022 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : librdkafka-1.6.1-102.el9.x86_64 20/20 2026-03-09T20:56:51.093 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librdkafka-1.6.1-102.el9.x86_64 20/20 2026-03-09T20:56:51.093 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 1/20 2026-03-09T20:56:51.093 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 2/20 2026-03-09T20:56:51.093 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 3/20 2026-03-09T20:56:51.093 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 4/20 2026-03-09T20:56:51.093 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 5/20 2026-03-09T20:56:51.093 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 6/20 2026-03-09T20:56:51.093 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20 2026-03-09T20:56:51.093 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : 
librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 8/20 2026-03-09T20:56:51.093 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 9/20 2026-03-09T20:56:51.093 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20 2026-03-09T20:56:51.093 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 11/20 2026-03-09T20:56:51.093 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 12/20 2026-03-09T20:56:51.093 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 13/20 2026-03-09T20:56:51.093 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 14/20 2026-03-09T20:56:51.093 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 15/20 2026-03-09T20:56:51.093 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 16/20 2026-03-09T20:56:51.093 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 17/20 2026-03-09T20:56:51.093 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 18/20 2026-03-09T20:56:51.093 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : re2-1:20211101-20.el9.x86_64 19/20 2026-03-09T20:56:51.145 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: librbd1 2026-03-09T20:56:51.145 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal. 2026-03-09T20:56:51.148 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-09T20:56:51.148 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do. 2026-03-09T20:56:51.148 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 
2026-03-09T20:56:51.154 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 20/20 2026-03-09T20:56:51.154 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:51.154 INFO:teuthology.orchestra.run.vm05.stdout:Removed: 2026-03-09T20:56:51.154 INFO:teuthology.orchestra.run.vm05.stdout: boost-program-options-1.75.0-13.el9.x86_64 2026-03-09T20:56:51.154 INFO:teuthology.orchestra.run.vm05.stdout: libarrow-9.0.0-15.el9.x86_64 2026-03-09T20:56:51.154 INFO:teuthology.orchestra.run.vm05.stdout: libarrow-doc-9.0.0-15.el9.noarch 2026-03-09T20:56:51.154 INFO:teuthology.orchestra.run.vm05.stdout: libnbd-1.20.3-4.el9.x86_64 2026-03-09T20:56:51.154 INFO:teuthology.orchestra.run.vm05.stdout: libpmemobj-1.12.1-1.el9.x86_64 2026-03-09T20:56:51.154 INFO:teuthology.orchestra.run.vm05.stdout: librabbitmq-0.11.0-7.el9.x86_64 2026-03-09T20:56:51.154 INFO:teuthology.orchestra.run.vm05.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:51.154 INFO:teuthology.orchestra.run.vm05.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:51.154 INFO:teuthology.orchestra.run.vm05.stdout: librdkafka-1.6.1-102.el9.x86_64 2026-03-09T20:56:51.154 INFO:teuthology.orchestra.run.vm05.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:51.154 INFO:teuthology.orchestra.run.vm05.stdout: lttng-ust-2.12.0-6.el9.x86_64 2026-03-09T20:56:51.154 INFO:teuthology.orchestra.run.vm05.stdout: parquet-libs-9.0.0-15.el9.x86_64 2026-03-09T20:56:51.154 INFO:teuthology.orchestra.run.vm05.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:51.154 INFO:teuthology.orchestra.run.vm05.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:51.154 INFO:teuthology.orchestra.run.vm05.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:51.154 INFO:teuthology.orchestra.run.vm05.stdout: qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 2026-03-09T20:56:51.154 INFO:teuthology.orchestra.run.vm05.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:51.154 INFO:teuthology.orchestra.run.vm05.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T20:56:51.154 INFO:teuthology.orchestra.run.vm05.stdout: re2-1:20211101-20.el9.x86_64 2026-03-09T20:56:51.154 INFO:teuthology.orchestra.run.vm05.stdout: thrift-0.15.0-4.el9.x86_64 2026-03-09T20:56:51.154 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T20:56:51.154 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 2026-03-09T20:56:51.374 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: python3-rados 2026-03-09T20:56:51.375 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal. 2026-03-09T20:56:51.377 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-09T20:56:51.377 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do. 2026-03-09T20:56:51.377 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-09T20:56:51.387 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: librbd1 2026-03-09T20:56:51.387 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal. 2026-03-09T20:56:51.389 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 2026-03-09T20:56:51.390 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do. 2026-03-09T20:56:51.390 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 
2026-03-09T20:56:51.569 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: python3-rados 2026-03-09T20:56:51.570 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal. 2026-03-09T20:56:51.572 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 2026-03-09T20:56:51.573 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do. 2026-03-09T20:56:51.573 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 2026-03-09T20:56:51.596 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: python3-rgw 2026-03-09T20:56:51.597 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal. 2026-03-09T20:56:51.599 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-09T20:56:51.599 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do. 2026-03-09T20:56:51.599 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-09T20:56:51.775 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: python3-rgw 2026-03-09T20:56:51.775 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal. 2026-03-09T20:56:51.777 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 2026-03-09T20:56:51.778 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do. 2026-03-09T20:56:51.778 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 2026-03-09T20:56:51.799 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: python3-cephfs 2026-03-09T20:56:51.799 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal. 2026-03-09T20:56:51.801 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-09T20:56:51.801 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do. 2026-03-09T20:56:51.801 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-09T20:56:51.960 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: python3-cephfs 2026-03-09T20:56:51.960 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal. 2026-03-09T20:56:51.962 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 2026-03-09T20:56:51.963 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do. 2026-03-09T20:56:51.963 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 2026-03-09T20:56:51.995 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: python3-rbd 2026-03-09T20:56:51.995 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal. 2026-03-09T20:56:51.997 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-09T20:56:51.997 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do. 2026-03-09T20:56:51.997 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-09T20:56:52.130 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: python3-rbd 2026-03-09T20:56:52.130 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal. 2026-03-09T20:56:52.132 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 2026-03-09T20:56:52.133 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do. 2026-03-09T20:56:52.133 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 2026-03-09T20:56:52.162 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: rbd-fuse 2026-03-09T20:56:52.162 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal. 2026-03-09T20:56:52.164 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-09T20:56:52.164 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do. 
2026-03-09T20:56:52.164 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-09T20:56:52.301 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: rbd-fuse 2026-03-09T20:56:52.302 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal. 2026-03-09T20:56:52.304 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 2026-03-09T20:56:52.304 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do. 2026-03-09T20:56:52.304 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 2026-03-09T20:56:52.331 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: rbd-mirror 2026-03-09T20:56:52.331 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal. 2026-03-09T20:56:52.333 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-09T20:56:52.334 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do. 2026-03-09T20:56:52.334 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-09T20:56:52.471 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: rbd-mirror 2026-03-09T20:56:52.471 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal. 2026-03-09T20:56:52.473 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 2026-03-09T20:56:52.474 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do. 2026-03-09T20:56:52.474 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 2026-03-09T20:56:52.505 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: rbd-nbd 2026-03-09T20:56:52.505 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal. 2026-03-09T20:56:52.507 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-09T20:56:52.508 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do. 2026-03-09T20:56:52.508 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-09T20:56:52.531 DEBUG:teuthology.orchestra.run.vm09:> sudo yum clean all 2026-03-09T20:56:52.644 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: rbd-nbd 2026-03-09T20:56:52.644 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal. 2026-03-09T20:56:52.646 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 2026-03-09T20:56:52.647 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do. 2026-03-09T20:56:52.647 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 
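
Taken together, the "No match for argument" lines and the full transactions on vm05 and vm09 show the teardown issuing one dnf removal per package on each remote, in parallel, and tolerating packages that are already gone (librbd1, python3-rados and the other bindings were already dragged out as dependents of librados2). A rough sketch of that pattern, assuming plain ssh access; the hostnames come from the log, while the helper and the exact package list are illustrative:

import subprocess
from concurrent.futures import ThreadPoolExecutor

HOSTS = ["ubuntu@vm05.local", "ubuntu@vm09.local"]  # hostnames as seen in the log
PACKAGES = [                                        # illustrative subset of the removal list
    "ceph-fuse", "librados-devel", "libcephfs-devel", "libcephfs2",
    "librados2", "librbd1", "python3-rados", "python3-rbd", "python3-rgw",
    "rbd-fuse", "rbd-mirror", "rbd-nbd",
]

def remove_packages(host: str) -> None:
    # One dnf transaction per package, as in the log; a package that is already
    # gone just yields "No match for argument" and the loop moves on.
    for pkg in PACKAGES:
        subprocess.run(["ssh", host, "sudo", "dnf", "-y", "remove", pkg], check=False)

with ThreadPoolExecutor(max_workers=len(HOSTS)) as pool:
    list(pool.map(remove_packages, HOSTS))
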
2026-03-09T20:56:52.655 INFO:teuthology.orchestra.run.vm09.stdout:56 files removed 2026-03-09T20:56:52.673 DEBUG:teuthology.orchestra.run.vm05:> sudo yum clean all 2026-03-09T20:56:52.677 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f /etc/yum.repos.d/ceph.repo 2026-03-09T20:56:52.702 DEBUG:teuthology.orchestra.run.vm09:> sudo yum clean expire-cache 2026-03-09T20:56:52.797 INFO:teuthology.orchestra.run.vm05.stdout:56 files removed 2026-03-09T20:56:52.817 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -f /etc/yum.repos.d/ceph.repo 2026-03-09T20:56:52.841 DEBUG:teuthology.orchestra.run.vm05:> sudo yum clean expire-cache 2026-03-09T20:56:52.860 INFO:teuthology.orchestra.run.vm09.stdout:Cache was expired 2026-03-09T20:56:52.860 INFO:teuthology.orchestra.run.vm09.stdout:0 files removed 2026-03-09T20:56:52.880 DEBUG:teuthology.parallel:result is None 2026-03-09T20:56:52.993 INFO:teuthology.orchestra.run.vm05.stdout:Cache was expired 2026-03-09T20:56:52.993 INFO:teuthology.orchestra.run.vm05.stdout:0 files removed 2026-03-09T20:56:53.012 DEBUG:teuthology.parallel:result is None 2026-03-09T20:56:53.013 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm05.local 2026-03-09T20:56:53.013 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm09.local 2026-03-09T20:56:53.013 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -f /etc/yum.repos.d/ceph.repo 2026-03-09T20:56:53.013 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f /etc/yum.repos.d/ceph.repo 2026-03-09T20:56:53.039 DEBUG:teuthology.orchestra.run.vm05:> sudo mv -f /etc/yum/pluginconf.d/priorities.conf.orig /etc/yum/pluginconf.d/priorities.conf 2026-03-09T20:56:53.040 DEBUG:teuthology.orchestra.run.vm09:> sudo mv -f /etc/yum/pluginconf.d/priorities.conf.orig /etc/yum/pluginconf.d/priorities.conf 2026-03-09T20:56:53.105 DEBUG:teuthology.parallel:result is None 2026-03-09T20:56:53.110 DEBUG:teuthology.parallel:result is None 2026-03-09T20:56:53.110 DEBUG:teuthology.run_tasks:Unwinding manager clock 2026-03-09T20:56:53.112 INFO:teuthology.task.clock:Checking final clock skew... 
2026-03-09T20:56:53.113 DEBUG:teuthology.orchestra.run.vm05:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true 2026-03-09T20:56:53.147 DEBUG:teuthology.orchestra.run.vm09:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true 2026-03-09T20:56:53.161 INFO:teuthology.orchestra.run.vm05.stderr:bash: line 1: ntpq: command not found 2026-03-09T20:56:53.166 INFO:teuthology.orchestra.run.vm05.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample 2026-03-09T20:56:53.167 INFO:teuthology.orchestra.run.vm05.stdout:=============================================================================== 2026-03-09T20:56:53.167 INFO:teuthology.orchestra.run.vm05.stdout:^+ sonne.floppy.org 2 8 377 157 -23us[ -35us] +/- 49ms 2026-03-09T20:56:53.167 INFO:teuthology.orchestra.run.vm05.stdout:^* de.relay.mahi.be 3 6 377 25 -153us[ -160us] +/- 19ms 2026-03-09T20:56:53.167 INFO:teuthology.orchestra.run.vm05.stdout:^+ 212.132.108.186 2 8 377 156 -123us[ -135us] +/- 45ms 2026-03-09T20:56:53.167 INFO:teuthology.orchestra.run.vm05.stdout:^+ 185.252.140.125 2 8 377 154 +213us[ +201us] +/- 20ms 2026-03-09T20:56:53.168 INFO:teuthology.orchestra.run.vm09.stderr:bash: line 1: ntpq: command not found 2026-03-09T20:56:53.173 INFO:teuthology.orchestra.run.vm09.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample 2026-03-09T20:56:53.173 INFO:teuthology.orchestra.run.vm09.stdout:=============================================================================== 2026-03-09T20:56:53.173 INFO:teuthology.orchestra.run.vm09.stdout:^* de.relay.mahi.be 3 6 377 29 -158us[ -164us] +/- 19ms 2026-03-09T20:56:53.173 INFO:teuthology.orchestra.run.vm09.stdout:^+ 212.132.108.186 2 8 377 165 -164us[ -174us] +/- 44ms 2026-03-09T20:56:53.173 INFO:teuthology.orchestra.run.vm09.stdout:^+ 185.252.140.125 2 8 377 155 +185us[ +180us] +/- 20ms 2026-03-09T20:56:53.173 INFO:teuthology.orchestra.run.vm09.stdout:^+ sonne.floppy.org 2 8 377 93 +50us[ +44us] +/- 50ms 2026-03-09T20:56:53.173 DEBUG:teuthology.run_tasks:Unwinding manager ansible.cephlab 2026-03-09T20:56:53.175 INFO:teuthology.task.ansible:Skipping ansible cleanup... 2026-03-09T20:56:53.175 DEBUG:teuthology.run_tasks:Unwinding manager selinux 2026-03-09T20:56:53.177 DEBUG:teuthology.run_tasks:Unwinding manager pcp 2026-03-09T20:56:53.179 DEBUG:teuthology.run_tasks:Unwinding manager internal.timer 2026-03-09T20:56:53.181 INFO:teuthology.task.internal:Duration was 2685.625366 seconds 2026-03-09T20:56:53.181 DEBUG:teuthology.run_tasks:Unwinding manager internal.syslog 2026-03-09T20:56:53.183 INFO:teuthology.task.internal.syslog:Shutting down syslog monitoring... 2026-03-09T20:56:53.183 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart 2026-03-09T20:56:53.209 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart 2026-03-09T20:56:53.249 INFO:teuthology.orchestra.run.vm05.stderr:Redirecting to /bin/systemctl restart rsyslog.service 2026-03-09T20:56:53.254 INFO:teuthology.orchestra.run.vm09.stderr:Redirecting to /bin/systemctl restart rsyslog.service 2026-03-09T20:56:53.551 INFO:teuthology.task.internal.syslog:Checking logs for errors... 
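
The final clock check runs the fallback shown above, PATH=/usr/bin:/usr/sbin ntpq -p || chronyc sources || true, so a missing ntpq (as on these CentOS 9 hosts) just drops through to chrony and the command never fails the job. A small sketch that issues the same command over ssh and picks out the source chrony is currently synchronised to (the "^*" row); the parsing is illustrative and not how teuthology itself evaluates skew:

import subprocess

# The exact fallback the log shows: ntpq first, then chronyc, never a hard failure.
CLOCK_CMD = "PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true"

def clock_report(host: str) -> str:
    result = subprocess.run(["ssh", host, CLOCK_CMD],
                            capture_output=True, text=True, check=False)
    return result.stdout

for host in ("ubuntu@vm05.local", "ubuntu@vm09.local"):
    report = clock_report(host)
    # chronyc marks the currently selected source with "^*".
    selected = [line for line in report.splitlines() if line.startswith("^*")]
    print(host, selected or ["no selected source"])
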
2026-03-09T20:56:53.551 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm05.local 2026-03-09T20:56:53.551 DEBUG:teuthology.orchestra.run.vm05:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1 2026-03-09T20:56:53.577 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm09.local 2026-03-09T20:56:53.578 DEBUG:teuthology.orchestra.run.vm09:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1 2026-03-09T20:56:53.619 INFO:teuthology.task.internal.syslog:Gathering journactl... 2026-03-09T20:56:53.619 DEBUG:teuthology.orchestra.run.vm05:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log 2026-03-09T20:56:53.620 DEBUG:teuthology.orchestra.run.vm09:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log 2026-03-09T20:56:54.179 INFO:teuthology.task.internal.syslog:Compressing syslogs... 
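
The syslog scan greps each host's kern.log for \bBUG\b, \bINFO\b or \bDEADLOCK\b, strips a long list of known-benign patterns, and keeps only the first surviving line; an empty result, as here, lets the task pass. A rough Python equivalent of that pipeline, with the exclusion list transcribed from the command above (an illustration, not teuthology's implementation):

import re

SUSPECT = re.compile(r"\bBUG\b|\bINFO\b|\bDEADLOCK\b")
# Exclusions transcribed from the grep -v chain above.
BENIGN = [re.compile(p) for p in (
    r"task .* blocked for more than .* seconds",
    r"lockdep is turned off",
    r"trying to register non-static key",
    r"DEBUG: fsize",
    r"CRON",
    r"BUG: bad unlock balance detected",
    r"inconsistent lock state",
    re.escape("*** DEADLOCK ***"),
    r"INFO: possible irq lock inversion dependency detected",
    r"INFO: NMI handler \(perf_event_nmi_handler\) took too long to run",
    r"INFO: recovery required on readonly",
    r"ceph-create-keys: INFO",
    r"INFO:ceph-create-keys",
    r"Loaded datasource DataSourceOpenStack",
    r"container-storage-setup: INFO: Volume group backing root filesystem could not be determined",
    r"\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b",
    r"ceph-crash",
    r"\btcmu-runner\b.*\bINFO\b",
)]

def first_suspect_line(kern_log_path: str):
    """Return the first kern.log line matching SUSPECT but none of the benign patterns."""
    with open(kern_log_path, errors="replace") as f:
        for line in f:
            if SUSPECT.search(line) and not any(p.search(line) for p in BENIGN):
                return line.rstrip("\n")
    return None  # None, as in this run, means no suspicious kernel messages
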
2026-03-09T20:56:54.179 DEBUG:teuthology.orchestra.run.vm05:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose -- 2026-03-09T20:56:54.181 DEBUG:teuthology.orchestra.run.vm09:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose -- 2026-03-09T20:56:54.203 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log 2026-03-09T20:56:54.203 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log 2026-03-09T20:56:54.204 INFO:teuthology.orchestra.run.vm05.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log 2026-03-09T20:56:54.204 INFO:teuthology.orchestra.run.vm05.stderr: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz 2026-03-09T20:56:54.204 INFO:teuthology.orchestra.run.vm05.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz 2026-03-09T20:56:54.208 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log 2026-03-09T20:56:54.209 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log 2026-03-09T20:56:54.209 INFO:teuthology.orchestra.run.vm09.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: gzip -5 --verbose -- 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz 2026-03-09T20:56:54.209 INFO:teuthology.orchestra.run.vm09.stderr: /home/ubuntu/cephtest/archive/syslog/journalctl.log 2026-03-09T20:56:54.209 INFO:teuthology.orchestra.run.vm09.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz 2026-03-09T20:56:54.344 INFO:teuthology.orchestra.run.vm09.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 97.9% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz 2026-03-09T20:56:54.359 INFO:teuthology.orchestra.run.vm05.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 97.6% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz 2026-03-09T20:56:54.361 DEBUG:teuthology.run_tasks:Unwinding manager internal.sudo 2026-03-09T20:56:54.363 INFO:teuthology.task.internal:Restoring /etc/sudoers... 
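
Syslog compression is a per-host find | xargs gzip -5 over every *.log in the archive's syslog directory; the xargs workers on each host write their --verbose output to stderr concurrently, which is why the kern.log and journalctl.log messages above come out interleaved. The same end result with Python's gzip module, as a sketch (function name and defaults are illustrative):

import gzip
import shutil
from pathlib import Path

def compress_logs(syslog_dir: str, level: int = 5) -> None:
    """gzip every *.log under syslog_dir in place, like the find | xargs gzip -5 step."""
    for log in sorted(Path(syslog_dir).glob("*.log")):
        gz = Path(str(log) + ".gz")
        with open(log, "rb") as src, gzip.open(gz, "wb", compresslevel=level) as dst:
            shutil.copyfileobj(src, dst)
        log.unlink()  # mirror gzip's "replaced with ....log.gz" behaviour

# e.g. compress_logs("/home/ubuntu/cephtest/archive/syslog")
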
2026-03-09T20:56:54.364 DEBUG:teuthology.orchestra.run.vm05:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers 2026-03-09T20:56:54.428 DEBUG:teuthology.orchestra.run.vm09:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers 2026-03-09T20:56:54.456 DEBUG:teuthology.run_tasks:Unwinding manager internal.coredump 2026-03-09T20:56:54.458 DEBUG:teuthology.orchestra.run.vm05:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump 2026-03-09T20:56:54.470 DEBUG:teuthology.orchestra.run.vm09:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump 2026-03-09T20:56:54.502 INFO:teuthology.orchestra.run.vm05.stdout:kernel.core_pattern = core 2026-03-09T20:56:54.523 INFO:teuthology.orchestra.run.vm09.stdout:kernel.core_pattern = core 2026-03-09T20:56:54.535 DEBUG:teuthology.orchestra.run.vm05:> test -e /home/ubuntu/cephtest/archive/coredump 2026-03-09T20:56:54.574 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T20:56:54.574 DEBUG:teuthology.orchestra.run.vm09:> test -e /home/ubuntu/cephtest/archive/coredump 2026-03-09T20:56:54.590 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T20:56:54.590 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive 2026-03-09T20:56:54.593 INFO:teuthology.task.internal:Transferring archived files... 2026-03-09T20:56:54.593 DEBUG:teuthology.misc:Transferring archived files from vm05:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/640/remote/vm05 2026-03-09T20:56:54.593 DEBUG:teuthology.orchestra.run.vm05:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- . 2026-03-09T20:56:54.653 DEBUG:teuthology.misc:Transferring archived files from vm09:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/640/remote/vm09 2026-03-09T20:56:54.653 DEBUG:teuthology.orchestra.run.vm09:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- . 2026-03-09T20:56:54.686 INFO:teuthology.task.internal:Removing archive directory... 2026-03-09T20:56:54.686 DEBUG:teuthology.orchestra.run.vm05:> rm -rf -- /home/ubuntu/cephtest/archive 2026-03-09T20:56:54.692 DEBUG:teuthology.orchestra.run.vm09:> rm -rf -- /home/ubuntu/cephtest/archive 2026-03-09T20:56:54.741 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload 2026-03-09T20:56:54.743 INFO:teuthology.task.internal:Not uploading archives. 2026-03-09T20:56:54.743 DEBUG:teuthology.run_tasks:Unwinding manager internal.base 2026-03-09T20:56:54.746 INFO:teuthology.task.internal:Tidying up after the test... 
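
Archiving streams each remote's /home/ubuntu/cephtest/archive directory to the teuthology host as an uncompressed tar on stdout (sudo tar c -f - -C ... -- .) and unpacks it under the job's remote/<host> directory, after which the remote copy is removed. A sketch of that streaming copy with subprocess; the host and paths are taken from the log, the function itself is illustrative:

import subprocess
from pathlib import Path

def pull_archive(host: str, remote_dir: str, local_dir: str) -> None:
    """Stream remote_dir from host as a tar over ssh and unpack it into local_dir."""
    Path(local_dir).mkdir(parents=True, exist_ok=True)
    remote_tar = subprocess.Popen(
        ["ssh", host, "sudo", "tar", "c", "-f", "-", "-C", remote_dir, "--", "."],
        stdout=subprocess.PIPE,
    )
    subprocess.run(["tar", "x", "-f", "-", "-C", local_dir],
                   stdin=remote_tar.stdout, check=True)
    remote_tar.stdout.close()
    remote_tar.wait()

# e.g. pull_archive("ubuntu@vm05.local", "/home/ubuntu/cephtest/archive",
#                   "/archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/640/remote/vm05")
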
2026-03-09T20:56:54.746 DEBUG:teuthology.orchestra.run.vm05:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-09T20:56:54.753 DEBUG:teuthology.orchestra.run.vm09:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-09T20:56:54.772 INFO:teuthology.orchestra.run.vm05.stdout: 8532152 0 drwxr-xr-x 2 ubuntu ubuntu 6 Mar 9 20:56 /home/ubuntu/cephtest
2026-03-09T20:56:54.798 INFO:teuthology.orchestra.run.vm09.stdout: 8532147 0 drwxr-xr-x 2 ubuntu ubuntu 6 Mar 9 20:56 /home/ubuntu/cephtest
2026-03-09T20:56:54.799 DEBUG:teuthology.run_tasks:Unwinding manager console_log
2026-03-09T20:56:54.805 INFO:teuthology.run:Summary data:
description: orch/cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests}
duration: 2685.625365972519
flavor: default
owner: kyr
success: true
2026-03-09T20:56:54.805 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-09T20:56:54.822 INFO:teuthology.run:pass
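
With the archives in place, the run closes with the summary (description, duration, flavor, owner, success) and pushes the job record to the report endpoint at http://localhost:8080. A sketch of posting such a summary as JSON; the route and payload shape are assumptions, since the log shows only the base URL:

import json
import urllib.request

# Route and payload shape are assumptions; the log only shows the base URL.
summary = {
    "description": ("orch/cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 "
                    "mode/packaged mon_election/classic msgr/async-v1only start "
                    "tasks/rados_api_tests}"),
    "duration": 2685.625365972519,
    "flavor": "default",
    "owner": "kyr",
    "success": True,
}

req = urllib.request.Request(
    "http://localhost:8080/",  # placeholder route
    data=json.dumps(summary).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # left commented: the real report API route is not shown here
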